00:00:00.000 Started by upstream project "autotest-per-patch" build number 132705
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.096 The recommended git tool is: git
00:00:00.096 using credential 00000000-0000-0000-0000-000000000002
00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.130 Fetching changes from the remote Git repository
00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.183 Using shallow fetch with depth 1
00:00:00.183 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.183 > git --version # timeout=10
00:00:00.220 > git --version # 'git version 2.39.2'
00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.195 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.206 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.217 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.217 > git config core.sparsecheckout # timeout=10
00:00:07.230 > git read-tree -mu HEAD # timeout=10
00:00:07.246 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.271 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.272 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.381 [Pipeline] Start of Pipeline
00:00:07.393 [Pipeline] library
00:00:07.395 Loading library shm_lib@master
00:00:07.395 Library shm_lib@master is cached. Copying from home.
00:00:07.409 [Pipeline] node
00:00:07.420 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.422 [Pipeline] {
00:00:07.431 [Pipeline] catchError
00:00:07.432 [Pipeline] {
00:00:07.444 [Pipeline] wrap
00:00:07.453 [Pipeline] {
00:00:07.461 [Pipeline] stage
00:00:07.463 [Pipeline] { (Prologue)
00:00:07.650 [Pipeline] sh
00:00:07.939 + logger -p user.info -t JENKINS-CI
00:00:07.954 [Pipeline] echo
00:00:07.955 Node: CYP12
00:00:07.963 [Pipeline] sh
00:00:08.261 [Pipeline] setCustomBuildProperty
00:00:08.272 [Pipeline] echo
00:00:08.274 Cleanup processes
00:00:08.280 [Pipeline] sh
00:00:08.570 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.570 1753604 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.584 [Pipeline] sh
00:00:08.871 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.871 ++ grep -v 'sudo pgrep'
00:00:08.872 ++ awk '{print $1}'
00:00:08.872 + sudo kill -9
00:00:08.872 + true
00:00:08.895 [Pipeline] cleanWs
00:00:08.905 [WS-CLEANUP] Deleting project workspace...
00:00:08.905 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.910 [WS-CLEANUP] done
00:00:08.913 [Pipeline] setCustomBuildProperty
00:00:08.923 [Pipeline] sh
00:00:09.208 + sudo git config --global --replace-all safe.directory '*'
00:00:09.333 [Pipeline] httpRequest
00:00:09.842 [Pipeline] echo
00:00:09.843 Sorcerer 10.211.164.20 is alive
00:00:09.849 [Pipeline] retry
00:00:09.851 [Pipeline] {
00:00:09.859 [Pipeline] httpRequest
00:00:09.863 HttpMethod: GET
00:00:09.864 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.864 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.884 Response Code: HTTP/1.1 200 OK
00:00:09.884 Success: Status code 200 is in the accepted range: 200,404
00:00:09.884 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:36.707 [Pipeline] }
00:00:36.725 [Pipeline] // retry
00:00:36.733 [Pipeline] sh
00:00:37.021 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:37.040 [Pipeline] httpRequest
00:00:37.426 [Pipeline] echo
00:00:37.428 Sorcerer 10.211.164.20 is alive
00:00:37.438 [Pipeline] retry
00:00:37.440 [Pipeline] {
00:00:37.455 [Pipeline] httpRequest
00:00:37.460 HttpMethod: GET
00:00:37.460 URL: http://10.211.164.20/packages/spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz
00:00:37.460 Sending request to url: http://10.211.164.20/packages/spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz
00:00:37.466 Response Code: HTTP/1.1 200 OK
00:00:37.467 Success: Status code 200 is in the accepted range: 200,404
00:00:37.467 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz
00:02:17.758 [Pipeline] }
00:02:17.776 [Pipeline] // retry
00:02:17.784 [Pipeline] sh
00:02:18.075 + tar --no-same-owner -xf spdk_a333974e53dcb7e60b097445a793459e0a17216f.tar.gz
00:02:21.417 [Pipeline] sh
00:02:21.700 + git -C spdk log --oneline -n5
00:02:21.701 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:21.701 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:21.701 e2dfdf06c accel/mlx5: Register post_poller handler
00:02:21.701 3c8001115 accel/mlx5: More precise condition to update DB
00:02:21.701 98eca6fa0 lib/thread: Add API to register a post poller handler
00:02:21.711 [Pipeline] }
00:02:21.722 [Pipeline] // stage
00:02:21.730 [Pipeline] stage
00:02:21.732 [Pipeline] { (Prepare)
00:02:21.747 [Pipeline] writeFile
00:02:21.764 [Pipeline] sh
00:02:22.048 + logger -p user.info -t JENKINS-CI
00:02:22.060 [Pipeline] sh
00:02:22.345 + logger -p user.info -t JENKINS-CI
00:02:22.356 [Pipeline] sh
00:02:22.638 + cat autorun-spdk.conf
00:02:22.638 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.638 SPDK_TEST_NVMF=1
00:02:22.638 SPDK_TEST_NVME_CLI=1
00:02:22.638 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:22.638 SPDK_TEST_NVMF_NICS=e810
00:02:22.638 SPDK_TEST_VFIOUSER=1
00:02:22.638 SPDK_RUN_UBSAN=1
00:02:22.638 NET_TYPE=phy
00:02:22.645 RUN_NIGHTLY=0
00:02:22.650 [Pipeline] readFile
00:02:22.674 [Pipeline] withEnv
00:02:22.677 [Pipeline] {
00:02:22.689 [Pipeline] sh
00:02:22.972 + set -ex
00:02:22.972 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:22.972 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:22.972 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.972 ++ SPDK_TEST_NVMF=1
00:02:22.972 ++ SPDK_TEST_NVME_CLI=1
00:02:22.972 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:22.972 ++ SPDK_TEST_NVMF_NICS=e810
00:02:22.972 ++ SPDK_TEST_VFIOUSER=1
00:02:22.972 ++ SPDK_RUN_UBSAN=1
00:02:22.972 ++ NET_TYPE=phy
00:02:22.972 ++ RUN_NIGHTLY=0
00:02:22.972 + case $SPDK_TEST_NVMF_NICS in
00:02:22.972 + DRIVERS=ice
00:02:22.972 + [[ tcp == \r\d\m\a ]]
00:02:22.972 + [[ -n ice ]]
00:02:22.972 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:22.972 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:22.972 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:22.972 rmmod: ERROR: Module irdma is not currently loaded
00:02:22.972 rmmod: ERROR: Module i40iw is not currently loaded
00:02:22.972 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:22.972 + true
00:02:22.972 + for D in $DRIVERS
00:02:22.972 + sudo modprobe ice
00:02:22.972 + exit 0
00:02:22.981 [Pipeline] }
00:02:22.993 [Pipeline] // withEnv
00:02:22.998 [Pipeline] }
00:02:23.012 [Pipeline] // stage
00:02:23.019 [Pipeline] catchError
00:02:23.021 [Pipeline] {
00:02:23.032 [Pipeline] timeout
00:02:23.032 Timeout set to expire in 1 hr 0 min
00:02:23.034 [Pipeline] {
00:02:23.048 [Pipeline] stage
00:02:23.050 [Pipeline] { (Tests)
00:02:23.064 [Pipeline] sh
00:02:23.349 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:23.349 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:23.349 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:23.349 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:23.349 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:23.349 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:23.349 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:23.349 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:23.349 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:23.349 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:23.349 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:23.349 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:23.349 + source /etc/os-release
00:02:23.349 ++ NAME='Fedora Linux'
00:02:23.349 ++ VERSION='39 (Cloud Edition)'
00:02:23.349 ++ ID=fedora
00:02:23.349 ++ VERSION_ID=39
00:02:23.349 ++ VERSION_CODENAME=
00:02:23.349 ++ PLATFORM_ID=platform:f39
00:02:23.349 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:23.349 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:23.349 ++ LOGO=fedora-logo-icon
00:02:23.349 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:23.349 ++ HOME_URL=https://fedoraproject.org/
00:02:23.349 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:23.349 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:23.349 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:23.349 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:23.349 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:23.349 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:23.349 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:23.349 ++ SUPPORT_END=2024-11-12
00:02:23.349 ++ VARIANT='Cloud Edition'
00:02:23.349 ++ VARIANT_ID=cloud
00:02:23.349 + uname -a
00:02:23.349 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:23.349 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:26.654 Hugepages
00:02:26.654 node hugesize free / total
00:02:26.654 node0 1048576kB 0 / 0
00:02:26.654 node0 2048kB 0 / 0
00:02:26.654 node1 1048576kB 0 / 0
00:02:26.654 node1 2048kB 0 / 0
00:02:26.654
00:02:26.654 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:26.654 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:26.654 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:26.654 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:26.654 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:26.655 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:26.655 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:26.655 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:26.655 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:26.655 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:26.655 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:26.655 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:26.655 + rm -f /tmp/spdk-ld-path
00:02:26.655 + source autorun-spdk.conf
00:02:26.655 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.655 ++ SPDK_TEST_NVMF=1
00:02:26.655 ++ SPDK_TEST_NVME_CLI=1
00:02:26.655 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:26.655 ++ SPDK_TEST_NVMF_NICS=e810
00:02:26.655 ++ SPDK_TEST_VFIOUSER=1
00:02:26.655 ++ SPDK_RUN_UBSAN=1
00:02:26.655 ++ NET_TYPE=phy
00:02:26.655 ++ RUN_NIGHTLY=0
00:02:26.655 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:26.655 + [[ -n '' ]]
00:02:26.655 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:26.655 + for M in /var/spdk/build-*-manifest.txt
00:02:26.655 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:26.655 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:26.655 + for M in /var/spdk/build-*-manifest.txt
00:02:26.655 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:26.655 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:26.655 + for M in /var/spdk/build-*-manifest.txt
00:02:26.655 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:26.655 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:26.655 ++ uname
00:02:26.655 + [[ Linux == \L\i\n\u\x ]]
00:02:26.655 + sudo dmesg -T
00:02:26.655 + sudo dmesg --clear
00:02:26.655 + dmesg_pid=1755260
00:02:26.655 + [[ Fedora Linux == FreeBSD ]]
00:02:26.655 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.655 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.655 + sudo dmesg -Tw
00:02:26.655 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:26.655 + [[ -x /usr/src/fio-static/fio ]]
00:02:26.655 + export FIO_BIN=/usr/src/fio-static/fio
00:02:26.655 + FIO_BIN=/usr/src/fio-static/fio
00:02:26.655 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:26.655 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:26.655 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:26.655 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.655 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.655 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:26.655 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.655 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.655 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.655 20:55:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:26.655 20:55:28 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:26.655 20:55:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:26.655 20:55:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:26.655 20:55:28 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.917 20:55:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:26.917 20:55:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:26.917 20:55:28 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:26.917 20:55:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:26.917 20:55:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:26.917 20:55:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:26.917 20:55:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.917 20:55:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.917 20:55:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.917 20:55:28 -- paths/export.sh@5 -- $ export PATH
00:02:26.917 20:55:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.917 20:55:28 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:26.917 20:55:28 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:26.917 20:55:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733428528.XXXXXX
00:02:26.917 20:55:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733428528.FTcsDF
00:02:26.917 20:55:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:26.917 20:55:28 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:26.917 20:55:28 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:26.917 20:55:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:26.917 20:55:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:26.917 20:55:28 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:26.917 20:55:28 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:26.917 20:55:28 -- common/autotest_common.sh@10 -- $ set +x
00:02:26.917 20:55:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:26.917 20:55:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:26.917 20:55:28 -- pm/common@17 -- $ local monitor
00:02:26.917 20:55:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.917 20:55:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.917 20:55:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.917 20:55:28 -- pm/common@21 -- $ date +%s
00:02:26.917 20:55:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.917 20:55:28 -- pm/common@25 -- $ sleep 1
00:02:26.917 20:55:28 -- pm/common@21 -- $ date +%s
00:02:26.917 20:55:28 -- pm/common@21 -- $ date +%s
00:02:26.917 20:55:28 -- pm/common@21 -- $ date +%s
00:02:26.917 20:55:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428528
00:02:26.917 20:55:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428528
00:02:26.917 20:55:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428528
00:02:26.917 20:55:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733428528
00:02:26.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428528_collect-cpu-load.pm.log
00:02:26.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428528_collect-vmstat.pm.log
00:02:26.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428528_collect-cpu-temp.pm.log
00:02:26.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733428528_collect-bmc-pm.bmc.pm.log
00:02:27.861 20:55:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:27.861 20:55:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:27.861 20:55:29 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:27.861 20:55:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:27.861 20:55:29 -- spdk/autobuild.sh@16 -- $ date -u
00:02:27.861 Thu Dec 5 07:55:29 PM UTC 2024
00:02:27.861 20:55:29 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:27.861 v25.01-pre-302-ga333974e5
00:02:27.861 20:55:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:27.861 20:55:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:27.861 20:55:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:27.861 20:55:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:27.861 20:55:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:27.861 20:55:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:27.861 ************************************
00:02:27.861 START TEST ubsan
00:02:27.861 ************************************
00:02:27.861 20:55:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:27.861 using ubsan
00:02:27.861
00:02:27.861 real 0m0.001s
00:02:27.861 user 0m0.000s
00:02:27.861 sys 0m0.001s
00:02:27.861 20:55:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:27.861 20:55:29 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:27.861 ************************************
00:02:27.861 END TEST ubsan
00:02:27.861 ************************************
00:02:27.861 20:55:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:27.861 20:55:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:27.861 20:55:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:27.861 20:55:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:27.861 20:55:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:27.861 20:55:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:27.861 20:55:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:27.861 20:55:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:27.861 20:55:29 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:28.121 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:28.121 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:28.382 Using 'verbs' RDMA provider
00:02:44.234 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:56.475 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:56.475 Creating mk/config.mk...done.
00:02:56.475 Creating mk/cc.flags.mk...done.
00:02:56.475 Type 'make' to build.
00:02:56.475 20:55:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:56.475 20:55:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:56.475 20:55:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:56.475 20:55:57 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.475 ************************************
00:02:56.475 START TEST make
00:02:56.475 ************************************
00:02:56.475 20:55:57 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:57.048 make[1]: Nothing to be done for 'all'.
00:02:58.427 The Meson build system
00:02:58.427 Version: 1.5.0
00:02:58.427 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:58.427 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:58.427 Build type: native build
00:02:58.427 Project name: libvfio-user
00:02:58.427 Project version: 0.0.1
00:02:58.427 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:58.427 C linker for the host machine: cc ld.bfd 2.40-14
00:02:58.427 Host machine cpu family: x86_64
00:02:58.427 Host machine cpu: x86_64
00:02:58.427 Run-time dependency threads found: YES
00:02:58.427 Library dl found: YES
00:02:58.427 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:58.427 Run-time dependency json-c found: YES 0.17
00:02:58.427 Run-time dependency cmocka found: YES 1.1.7
00:02:58.427 Program pytest-3 found: NO
00:02:58.427 Program flake8 found: NO
00:02:58.427 Program misspell-fixer found: NO
00:02:58.427 Program restructuredtext-lint found: NO
00:02:58.427 Program valgrind found: YES (/usr/bin/valgrind)
00:02:58.427 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:58.427 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:58.427 Compiler for C supports arguments -Wwrite-strings: YES
00:02:58.427 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:58.427 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:58.427 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:58.427 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:58.427 Build targets in project: 8
00:02:58.427 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:58.427 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:58.427
00:02:58.427 libvfio-user 0.0.1
00:02:58.427
00:02:58.427 User defined options
00:02:58.427 buildtype : debug
00:02:58.427 default_library: shared
00:02:58.427 libdir : /usr/local/lib
00:02:58.427
00:02:58.427 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:58.427 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:58.686 [1/37] Compiling C object samples/null.p/null.c.o
00:02:58.686 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:58.686 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:58.686 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:58.686 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:58.686 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:58.686 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:58.686 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:58.686 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:58.686 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:58.686 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:58.686 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:58.686 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:58.686 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:58.686 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:58.686 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:58.686 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:58.687 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:58.687 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:58.687 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:58.687 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:58.687 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:58.687 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:58.687 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:58.687 [25/37] Compiling C object samples/client.p/client.c.o
00:02:58.687 [26/37] Compiling C object samples/server.p/server.c.o
00:02:58.687 [27/37] Linking target samples/client
00:02:58.687 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:58.687 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:58.687 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:58.947 [31/37] Linking target test/unit_tests
00:02:58.947 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:58.947 [33/37] Linking target samples/gpio-pci-idio-16
00:02:58.947 [34/37] Linking target samples/shadow_ioeventfd_server
00:02:58.947 [35/37] Linking target samples/server
00:02:58.947 [36/37] Linking target samples/null
00:02:58.947 [37/37] Linking target samples/lspci
00:02:58.947 INFO: autodetecting backend as ninja
00:02:58.947 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:58.947 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:59.521 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:59.521 ninja: no work to do.
00:03:06.102 The Meson build system
00:03:06.102 Version: 1.5.0
00:03:06.102 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:06.102 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:06.102 Build type: native build
00:03:06.102 Program cat found: YES (/usr/bin/cat)
00:03:06.102 Project name: DPDK
00:03:06.102 Project version: 24.03.0
00:03:06.102 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:06.102 C linker for the host machine: cc ld.bfd 2.40-14
00:03:06.102 Host machine cpu family: x86_64
00:03:06.102 Host machine cpu: x86_64
00:03:06.102 Message: ## Building in Developer Mode ##
00:03:06.102 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:06.102 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:06.102 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:06.102 Program python3 found: YES (/usr/bin/python3)
00:03:06.102 Program cat found: YES (/usr/bin/cat)
00:03:06.102 Compiler for C supports arguments -march=native: YES
00:03:06.102 Checking for size of "void *" : 8
00:03:06.102 Checking for size of "void *" : 8 (cached)
00:03:06.102 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:06.102 Library m found: YES
00:03:06.102 Library numa found: YES
00:03:06.102 Has header "numaif.h" : YES
00:03:06.102 Library fdt found: NO
00:03:06.102 Library execinfo found: NO
00:03:06.102 Has header "execinfo.h" : YES
00:03:06.102 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:06.102 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:06.102 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:06.102 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:06.102 Run-time dependency openssl found: YES 3.1.1
00:03:06.102 Run-time dependency libpcap found: YES 1.10.4
00:03:06.102 Has header "pcap.h" with dependency libpcap: YES
00:03:06.102 Compiler for C supports arguments -Wcast-qual: YES
00:03:06.102 Compiler for C supports arguments -Wdeprecated: YES
00:03:06.102 Compiler for C supports arguments -Wformat: YES
00:03:06.102 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:06.102 Compiler for C supports arguments -Wformat-security: NO
00:03:06.102 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:06.102 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:06.102 Compiler for C supports arguments -Wnested-externs: YES
00:03:06.102 Compiler for C supports arguments -Wold-style-definition: YES
00:03:06.102 Compiler for C supports arguments -Wpointer-arith: YES
00:03:06.102 Compiler for C supports arguments -Wsign-compare: YES
00:03:06.102 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:06.102 Compiler for C supports arguments -Wundef: YES
00:03:06.102 Compiler for C supports arguments -Wwrite-strings: YES
00:03:06.102 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:06.102 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:06.102 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:06.102 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:06.102 Program objdump found: YES (/usr/bin/objdump)
00:03:06.102 Compiler for C supports arguments -mavx512f: YES
00:03:06.102 Checking if "AVX512 checking" compiles: YES
00:03:06.102 Fetching value of define "__SSE4_2__" : 1
00:03:06.102 Fetching value of define "__AES__" : 1
00:03:06.102 Fetching value of define "__AVX__" : 1
00:03:06.102 Fetching value of define "__AVX2__" : 1
00:03:06.102 Fetching value of define "__AVX512BW__" : 1
00:03:06.102 Fetching value of define "__AVX512CD__" : 1
00:03:06.102 Fetching value of define "__AVX512DQ__" : 1
00:03:06.102 Fetching value of define "__AVX512F__" : 1
00:03:06.102 Fetching value of define "__AVX512VL__" : 1 00:03:06.102 Fetching value of define "__PCLMUL__" : 1 00:03:06.102 Fetching value of define "__RDRND__" : 1 00:03:06.102 Fetching value of define "__RDSEED__" : 1 00:03:06.102 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:06.102 Fetching value of define "__znver1__" : (undefined) 00:03:06.102 Fetching value of define "__znver2__" : (undefined) 00:03:06.102 Fetching value of define "__znver3__" : (undefined) 00:03:06.102 Fetching value of define "__znver4__" : (undefined) 00:03:06.102 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:06.102 Message: lib/log: Defining dependency "log" 00:03:06.102 Message: lib/kvargs: Defining dependency "kvargs" 00:03:06.102 Message: lib/telemetry: Defining dependency "telemetry" 00:03:06.102 Checking for function "getentropy" : NO 00:03:06.102 Message: lib/eal: Defining dependency "eal" 00:03:06.102 Message: lib/ring: Defining dependency "ring" 00:03:06.102 Message: lib/rcu: Defining dependency "rcu" 00:03:06.102 Message: lib/mempool: Defining dependency "mempool" 00:03:06.103 Message: lib/mbuf: Defining dependency "mbuf" 00:03:06.103 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:06.103 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.103 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.103 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:06.103 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:06.103 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:06.103 Compiler for C supports arguments -mpclmul: YES 00:03:06.103 Compiler for C supports arguments -maes: YES 00:03:06.103 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.103 Compiler for C supports arguments -mavx512bw: YES 00:03:06.103 Compiler for C supports arguments -mavx512dq: YES 00:03:06.103 Compiler for C supports arguments -mavx512vl: YES 00:03:06.103 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:06.103 Compiler for C supports arguments -mavx2: YES 00:03:06.103 Compiler for C supports arguments -mavx: YES 00:03:06.103 Message: lib/net: Defining dependency "net" 00:03:06.103 Message: lib/meter: Defining dependency "meter" 00:03:06.103 Message: lib/ethdev: Defining dependency "ethdev" 00:03:06.103 Message: lib/pci: Defining dependency "pci" 00:03:06.103 Message: lib/cmdline: Defining dependency "cmdline" 00:03:06.103 Message: lib/hash: Defining dependency "hash" 00:03:06.103 Message: lib/timer: Defining dependency "timer" 00:03:06.103 Message: lib/compressdev: Defining dependency "compressdev" 00:03:06.103 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:06.103 Message: lib/dmadev: Defining dependency "dmadev" 00:03:06.103 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:06.103 Message: lib/power: Defining dependency "power" 00:03:06.103 Message: lib/reorder: Defining dependency "reorder" 00:03:06.103 Message: lib/security: Defining dependency "security" 00:03:06.103 Has header "linux/userfaultfd.h" : YES 00:03:06.103 Has header "linux/vduse.h" : YES 00:03:06.103 Message: lib/vhost: Defining dependency "vhost" 00:03:06.103 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:06.103 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:06.103 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:06.103 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:06.103 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:06.103 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:06.103 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:06.103 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:06.103 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:06.103 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:06.103 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:06.103 Configuring doxy-api-html.conf using configuration 00:03:06.103 Configuring doxy-api-man.conf using configuration 00:03:06.103 Program mandb found: YES (/usr/bin/mandb) 00:03:06.103 Program sphinx-build found: NO 00:03:06.103 Configuring rte_build_config.h using configuration 00:03:06.103 Message: 00:03:06.103 ================= 00:03:06.103 Applications Enabled 00:03:06.103 ================= 00:03:06.103 00:03:06.103 apps: 00:03:06.103 00:03:06.103 00:03:06.103 Message: 00:03:06.103 ================= 00:03:06.103 Libraries Enabled 00:03:06.103 ================= 00:03:06.103 00:03:06.103 libs: 00:03:06.103 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:06.103 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:06.103 cryptodev, dmadev, power, reorder, security, vhost, 00:03:06.103 00:03:06.103 Message: 00:03:06.103 =============== 00:03:06.103 Drivers Enabled 00:03:06.103 =============== 00:03:06.103 00:03:06.103 common: 00:03:06.103 00:03:06.103 bus: 00:03:06.103 pci, vdev, 00:03:06.103 mempool: 00:03:06.103 ring, 00:03:06.103 dma: 00:03:06.103 00:03:06.103 net: 00:03:06.103 00:03:06.103 crypto: 00:03:06.103 00:03:06.103 compress: 00:03:06.103 00:03:06.103 vdpa: 00:03:06.103 00:03:06.103 00:03:06.103 Message: 00:03:06.103 ================= 00:03:06.103 Content Skipped 00:03:06.103 ================= 00:03:06.103 00:03:06.103 apps: 00:03:06.103 dumpcap: explicitly disabled via build config 00:03:06.103 graph: explicitly disabled via build config 00:03:06.103 pdump: explicitly disabled via build config 00:03:06.103 proc-info: explicitly disabled via build config 00:03:06.103 test-acl: explicitly disabled via build config 00:03:06.103 test-bbdev: explicitly disabled via build config 00:03:06.103 test-cmdline: explicitly disabled via build config 00:03:06.103 test-compress-perf: explicitly disabled via build config 00:03:06.103 test-crypto-perf: explicitly disabled via build 
config 00:03:06.103 test-dma-perf: explicitly disabled via build config 00:03:06.103 test-eventdev: explicitly disabled via build config 00:03:06.103 test-fib: explicitly disabled via build config 00:03:06.103 test-flow-perf: explicitly disabled via build config 00:03:06.103 test-gpudev: explicitly disabled via build config 00:03:06.103 test-mldev: explicitly disabled via build config 00:03:06.103 test-pipeline: explicitly disabled via build config 00:03:06.103 test-pmd: explicitly disabled via build config 00:03:06.103 test-regex: explicitly disabled via build config 00:03:06.103 test-sad: explicitly disabled via build config 00:03:06.103 test-security-perf: explicitly disabled via build config 00:03:06.103 00:03:06.103 libs: 00:03:06.103 argparse: explicitly disabled via build config 00:03:06.103 metrics: explicitly disabled via build config 00:03:06.103 acl: explicitly disabled via build config 00:03:06.103 bbdev: explicitly disabled via build config 00:03:06.103 bitratestats: explicitly disabled via build config 00:03:06.103 bpf: explicitly disabled via build config 00:03:06.103 cfgfile: explicitly disabled via build config 00:03:06.103 distributor: explicitly disabled via build config 00:03:06.103 efd: explicitly disabled via build config 00:03:06.103 eventdev: explicitly disabled via build config 00:03:06.103 dispatcher: explicitly disabled via build config 00:03:06.103 gpudev: explicitly disabled via build config 00:03:06.103 gro: explicitly disabled via build config 00:03:06.103 gso: explicitly disabled via build config 00:03:06.103 ip_frag: explicitly disabled via build config 00:03:06.103 jobstats: explicitly disabled via build config 00:03:06.103 latencystats: explicitly disabled via build config 00:03:06.103 lpm: explicitly disabled via build config 00:03:06.103 member: explicitly disabled via build config 00:03:06.103 pcapng: explicitly disabled via build config 00:03:06.103 rawdev: explicitly disabled via build config 00:03:06.103 regexdev: explicitly 
disabled via build config 00:03:06.103 mldev: explicitly disabled via build config 00:03:06.103 rib: explicitly disabled via build config 00:03:06.103 sched: explicitly disabled via build config 00:03:06.103 stack: explicitly disabled via build config 00:03:06.103 ipsec: explicitly disabled via build config 00:03:06.103 pdcp: explicitly disabled via build config 00:03:06.103 fib: explicitly disabled via build config 00:03:06.103 port: explicitly disabled via build config 00:03:06.103 pdump: explicitly disabled via build config 00:03:06.103 table: explicitly disabled via build config 00:03:06.103 pipeline: explicitly disabled via build config 00:03:06.103 graph: explicitly disabled via build config 00:03:06.103 node: explicitly disabled via build config 00:03:06.103 00:03:06.103 drivers: 00:03:06.103 common/cpt: not in enabled drivers build config 00:03:06.103 common/dpaax: not in enabled drivers build config 00:03:06.103 common/iavf: not in enabled drivers build config 00:03:06.103 common/idpf: not in enabled drivers build config 00:03:06.103 common/ionic: not in enabled drivers build config 00:03:06.103 common/mvep: not in enabled drivers build config 00:03:06.103 common/octeontx: not in enabled drivers build config 00:03:06.103 bus/auxiliary: not in enabled drivers build config 00:03:06.103 bus/cdx: not in enabled drivers build config 00:03:06.103 bus/dpaa: not in enabled drivers build config 00:03:06.103 bus/fslmc: not in enabled drivers build config 00:03:06.103 bus/ifpga: not in enabled drivers build config 00:03:06.103 bus/platform: not in enabled drivers build config 00:03:06.103 bus/uacce: not in enabled drivers build config 00:03:06.103 bus/vmbus: not in enabled drivers build config 00:03:06.103 common/cnxk: not in enabled drivers build config 00:03:06.103 common/mlx5: not in enabled drivers build config 00:03:06.103 common/nfp: not in enabled drivers build config 00:03:06.103 common/nitrox: not in enabled drivers build config 00:03:06.103 common/qat: not 
in enabled drivers build config 00:03:06.103 common/sfc_efx: not in enabled drivers build config 00:03:06.103 mempool/bucket: not in enabled drivers build config 00:03:06.103 mempool/cnxk: not in enabled drivers build config 00:03:06.103 mempool/dpaa: not in enabled drivers build config 00:03:06.103 mempool/dpaa2: not in enabled drivers build config 00:03:06.103 mempool/octeontx: not in enabled drivers build config 00:03:06.103 mempool/stack: not in enabled drivers build config 00:03:06.103 dma/cnxk: not in enabled drivers build config 00:03:06.103 dma/dpaa: not in enabled drivers build config 00:03:06.103 dma/dpaa2: not in enabled drivers build config 00:03:06.104 dma/hisilicon: not in enabled drivers build config 00:03:06.104 dma/idxd: not in enabled drivers build config 00:03:06.104 dma/ioat: not in enabled drivers build config 00:03:06.104 dma/skeleton: not in enabled drivers build config 00:03:06.104 net/af_packet: not in enabled drivers build config 00:03:06.104 net/af_xdp: not in enabled drivers build config 00:03:06.104 net/ark: not in enabled drivers build config 00:03:06.104 net/atlantic: not in enabled drivers build config 00:03:06.104 net/avp: not in enabled drivers build config 00:03:06.104 net/axgbe: not in enabled drivers build config 00:03:06.104 net/bnx2x: not in enabled drivers build config 00:03:06.104 net/bnxt: not in enabled drivers build config 00:03:06.104 net/bonding: not in enabled drivers build config 00:03:06.104 net/cnxk: not in enabled drivers build config 00:03:06.104 net/cpfl: not in enabled drivers build config 00:03:06.104 net/cxgbe: not in enabled drivers build config 00:03:06.104 net/dpaa: not in enabled drivers build config 00:03:06.104 net/dpaa2: not in enabled drivers build config 00:03:06.104 net/e1000: not in enabled drivers build config 00:03:06.104 net/ena: not in enabled drivers build config 00:03:06.104 net/enetc: not in enabled drivers build config 00:03:06.104 net/enetfec: not in enabled drivers build config 
00:03:06.104 net/enic: not in enabled drivers build config 00:03:06.104 net/failsafe: not in enabled drivers build config 00:03:06.104 net/fm10k: not in enabled drivers build config 00:03:06.104 net/gve: not in enabled drivers build config 00:03:06.104 net/hinic: not in enabled drivers build config 00:03:06.104 net/hns3: not in enabled drivers build config 00:03:06.104 net/i40e: not in enabled drivers build config 00:03:06.104 net/iavf: not in enabled drivers build config 00:03:06.104 net/ice: not in enabled drivers build config 00:03:06.104 net/idpf: not in enabled drivers build config 00:03:06.104 net/igc: not in enabled drivers build config 00:03:06.104 net/ionic: not in enabled drivers build config 00:03:06.104 net/ipn3ke: not in enabled drivers build config 00:03:06.104 net/ixgbe: not in enabled drivers build config 00:03:06.104 net/mana: not in enabled drivers build config 00:03:06.104 net/memif: not in enabled drivers build config 00:03:06.104 net/mlx4: not in enabled drivers build config 00:03:06.104 net/mlx5: not in enabled drivers build config 00:03:06.104 net/mvneta: not in enabled drivers build config 00:03:06.104 net/mvpp2: not in enabled drivers build config 00:03:06.104 net/netvsc: not in enabled drivers build config 00:03:06.104 net/nfb: not in enabled drivers build config 00:03:06.104 net/nfp: not in enabled drivers build config 00:03:06.104 net/ngbe: not in enabled drivers build config 00:03:06.104 net/null: not in enabled drivers build config 00:03:06.104 net/octeontx: not in enabled drivers build config 00:03:06.104 net/octeon_ep: not in enabled drivers build config 00:03:06.104 net/pcap: not in enabled drivers build config 00:03:06.104 net/pfe: not in enabled drivers build config 00:03:06.104 net/qede: not in enabled drivers build config 00:03:06.104 net/ring: not in enabled drivers build config 00:03:06.104 net/sfc: not in enabled drivers build config 00:03:06.104 net/softnic: not in enabled drivers build config 00:03:06.104 net/tap: not in 
enabled drivers build config 00:03:06.104 net/thunderx: not in enabled drivers build config 00:03:06.104 net/txgbe: not in enabled drivers build config 00:03:06.104 net/vdev_netvsc: not in enabled drivers build config 00:03:06.104 net/vhost: not in enabled drivers build config 00:03:06.104 net/virtio: not in enabled drivers build config 00:03:06.104 net/vmxnet3: not in enabled drivers build config 00:03:06.104 raw/*: missing internal dependency, "rawdev" 00:03:06.104 crypto/armv8: not in enabled drivers build config 00:03:06.104 crypto/bcmfs: not in enabled drivers build config 00:03:06.104 crypto/caam_jr: not in enabled drivers build config 00:03:06.104 crypto/ccp: not in enabled drivers build config 00:03:06.104 crypto/cnxk: not in enabled drivers build config 00:03:06.104 crypto/dpaa_sec: not in enabled drivers build config 00:03:06.104 crypto/dpaa2_sec: not in enabled drivers build config 00:03:06.104 crypto/ipsec_mb: not in enabled drivers build config 00:03:06.104 crypto/mlx5: not in enabled drivers build config 00:03:06.104 crypto/mvsam: not in enabled drivers build config 00:03:06.104 crypto/nitrox: not in enabled drivers build config 00:03:06.104 crypto/null: not in enabled drivers build config 00:03:06.104 crypto/octeontx: not in enabled drivers build config 00:03:06.104 crypto/openssl: not in enabled drivers build config 00:03:06.104 crypto/scheduler: not in enabled drivers build config 00:03:06.104 crypto/uadk: not in enabled drivers build config 00:03:06.104 crypto/virtio: not in enabled drivers build config 00:03:06.104 compress/isal: not in enabled drivers build config 00:03:06.104 compress/mlx5: not in enabled drivers build config 00:03:06.104 compress/nitrox: not in enabled drivers build config 00:03:06.104 compress/octeontx: not in enabled drivers build config 00:03:06.104 compress/zlib: not in enabled drivers build config 00:03:06.104 regex/*: missing internal dependency, "regexdev" 00:03:06.104 ml/*: missing internal dependency, "mldev" 
00:03:06.104 vdpa/ifc: not in enabled drivers build config 00:03:06.104 vdpa/mlx5: not in enabled drivers build config 00:03:06.104 vdpa/nfp: not in enabled drivers build config 00:03:06.104 vdpa/sfc: not in enabled drivers build config 00:03:06.104 event/*: missing internal dependency, "eventdev" 00:03:06.104 baseband/*: missing internal dependency, "bbdev" 00:03:06.104 gpu/*: missing internal dependency, "gpudev" 00:03:06.104 00:03:06.104 00:03:06.104 Build targets in project: 84 00:03:06.104 00:03:06.104 DPDK 24.03.0 00:03:06.104 00:03:06.104 User defined options 00:03:06.104 buildtype : debug 00:03:06.104 default_library : shared 00:03:06.104 libdir : lib 00:03:06.104 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:06.104 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:06.104 c_link_args : 00:03:06.104 cpu_instruction_set: native 00:03:06.104 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:06.104 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:06.104 enable_docs : false 00:03:06.104 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:06.104 enable_kmods : false 00:03:06.104 max_lcores : 128 00:03:06.104 tests : false 00:03:06.104 00:03:06.104 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:06.104 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:06.104 [1/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:06.104 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:06.104 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:06.104 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:06.104 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:06.104 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:06.104 [7/267] Linking static target lib/librte_kvargs.a 00:03:06.104 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:06.104 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:06.104 [10/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:06.104 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:06.104 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:06.104 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:06.104 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:06.104 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:06.104 [16/267] Linking static target lib/librte_log.a 00:03:06.104 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:06.104 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:06.104 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:06.104 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:06.104 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:06.104 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:06.104 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:06.104 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:06.104 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.104 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:06.104 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:06.104 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:06.104 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:06.104 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:06.104 [31/267] Linking static target lib/librte_pci.a 00:03:06.104 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:06.104 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:06.104 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:06.104 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:06.104 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.104 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:06.104 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.363 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:06.363 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.363 [41/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:06.363 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:06.363 [43/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:06.363 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:06.363 [45/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.363 [46/267] Generating lib/pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:06.363 [47/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:06.363 [48/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:06.363 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:06.363 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:06.363 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:06.363 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:06.363 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:06.363 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:06.363 [55/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:06.363 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:06.363 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:06.363 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:06.363 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:06.363 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:06.363 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.363 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:06.363 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:06.363 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.363 [65/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:06.363 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:06.363 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:06.363 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:06.363 [69/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:06.363 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:06.363 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:06.363 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:06.363 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:06.363 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:06.363 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:06.363 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:06.363 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:06.363 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:06.363 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:06.363 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:06.363 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:06.363 [82/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:06.363 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:06.363 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:06.363 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:06.363 [86/267] Linking static target lib/librte_ring.a 00:03:06.363 [87/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:06.363 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:06.363 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.363 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.363 [91/267] 
Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:06.363 [92/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:06.363 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:06.363 [94/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:06.363 [95/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:06.623 [96/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:06.623 [97/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:06.623 [98/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:06.623 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.623 [100/267] Linking static target lib/librte_telemetry.a 00:03:06.623 [101/267] Linking static target lib/librte_timer.a 00:03:06.623 [102/267] Linking static target lib/librte_meter.a 00:03:06.623 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:06.623 [104/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:06.623 [105/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.623 [106/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:06.623 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:06.623 [108/267] Linking static target lib/librte_dmadev.a 00:03:06.623 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:06.623 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:06.623 [111/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:06.623 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:06.623 [113/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:06.623 [114/267] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:06.623 [115/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:06.623 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.623 [117/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.623 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:06.623 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:06.623 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:06.623 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:06.623 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:06.623 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:06.623 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:06.623 [125/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.623 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:06.623 [127/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:06.623 [128/267] Linking static target lib/librte_cmdline.a 00:03:06.623 [129/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.623 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.623 [131/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.623 [132/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:06.623 [133/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:06.623 [134/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:06.623 [135/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:06.623 [136/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:06.623 [137/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.623 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:06.623 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:06.623 [140/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:06.623 [141/267] Linking static target lib/librte_net.a 00:03:06.623 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:06.623 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.623 [144/267] Linking static target lib/librte_mempool.a 00:03:06.623 [145/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:06.623 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:06.623 [147/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.623 [148/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:06.623 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:06.623 [150/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:06.623 [151/267] Linking static target lib/librte_power.a 00:03:06.623 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:06.623 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:06.623 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:06.623 [155/267] Linking static target lib/librte_rcu.a 00:03:06.623 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:06.623 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.623 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:06.623 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:03:06.623 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:06.623 [161/267] Linking target lib/librte_log.so.24.1 00:03:06.623 [162/267] Linking static target lib/librte_compressdev.a 00:03:06.623 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.623 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:06.623 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:06.623 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.623 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.623 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.623 [169/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.624 [170/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:06.624 [171/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.624 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.624 [173/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:06.624 [174/267] Linking static target lib/librte_security.a 00:03:06.624 [175/267] Linking static target drivers/librte_bus_vdev.a 00:03:06.624 [176/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.624 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.624 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.624 [179/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:06.624 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.624 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:06.624 [182/267] Linking static target lib/librte_reorder.a 00:03:06.624 
[183/267] Linking static target lib/librte_eal.a 00:03:06.624 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:06.884 [185/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.884 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:06.884 [187/267] Linking target lib/librte_kvargs.so.24.1 00:03:06.884 [188/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.884 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:06.884 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.884 [191/267] Linking static target lib/librte_mbuf.a 00:03:06.884 [192/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:06.884 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:06.884 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:06.884 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.884 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.884 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.884 [198/267] Linking static target lib/librte_hash.a 00:03:06.884 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:06.884 [200/267] Linking static target drivers/librte_mempool_ring.a 00:03:06.884 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.884 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.884 [203/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.884 [204/267] Linking static target drivers/librte_bus_pci.a 00:03:06.884 [205/267] 
Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.144 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:07.144 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.144 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.144 [209/267] Linking static target lib/librte_cryptodev.a 00:03:07.144 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.144 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.144 [212/267] Linking target lib/librte_telemetry.so.24.1 00:03:07.144 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.144 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.404 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:07.404 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.404 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.404 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.665 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.665 [220/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.665 [221/267] Linking static target lib/librte_ethdev.a 00:03:07.665 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.665 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.932 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:07.932 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.932 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.588 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.588 [228/267] Linking static target lib/librte_vhost.a 00:03:09.159 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.592 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.176 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.112 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.372 [233/267] Linking target lib/librte_eal.so.24.1 00:03:18.372 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:18.372 [235/267] Linking target lib/librte_ring.so.24.1 00:03:18.372 [236/267] Linking target lib/librte_timer.so.24.1 00:03:18.372 [237/267] Linking target lib/librte_pci.so.24.1 00:03:18.372 [238/267] Linking target lib/librte_meter.so.24.1 00:03:18.372 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:18.372 [240/267] Linking target lib/librte_dmadev.so.24.1 00:03:18.633 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:18.633 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:18.633 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:18.633 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:18.633 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:18.633 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:18.633 [247/267] Linking target lib/librte_mempool.so.24.1 
00:03:18.633 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:18.633 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:18.893 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:18.893 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:18.893 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:18.893 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:18.893 [254/267] Linking target lib/librte_net.so.24.1 00:03:18.893 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:18.893 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:18.893 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:19.153 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:19.153 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:19.153 [260/267] Linking target lib/librte_hash.so.24.1 00:03:19.153 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:19.153 [262/267] Linking target lib/librte_ethdev.so.24.1 00:03:19.153 [263/267] Linking target lib/librte_security.so.24.1 00:03:19.153 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:19.153 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:19.413 [266/267] Linking target lib/librte_vhost.so.24.1 00:03:19.413 [267/267] Linking target lib/librte_power.so.24.1 00:03:19.413 INFO: autodetecting backend as ninja 00:03:19.413 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:22.706 CC lib/ut/ut.o 00:03:22.706 CC lib/log/log.o 00:03:22.706 CC lib/log/log_flags.o 00:03:22.706 CC lib/log/log_deprecated.o 00:03:22.706 CC lib/ut_mock/mock.o 00:03:22.706 LIB libspdk_ut_mock.a 00:03:22.706 LIB 
libspdk_ut.a 00:03:22.706 LIB libspdk_log.a 00:03:22.706 SO libspdk_ut.so.2.0 00:03:22.706 SO libspdk_ut_mock.so.6.0 00:03:22.706 SO libspdk_log.so.7.1 00:03:22.706 SYMLINK libspdk_ut.so 00:03:22.706 SYMLINK libspdk_ut_mock.so 00:03:22.706 SYMLINK libspdk_log.so 00:03:22.967 CC lib/dma/dma.o 00:03:22.967 CC lib/ioat/ioat.o 00:03:22.967 CXX lib/trace_parser/trace.o 00:03:22.967 CC lib/util/base64.o 00:03:22.967 CC lib/util/bit_array.o 00:03:22.967 CC lib/util/crc16.o 00:03:22.967 CC lib/util/cpuset.o 00:03:22.967 CC lib/util/crc32_ieee.o 00:03:22.967 CC lib/util/crc32.o 00:03:22.967 CC lib/util/crc32c.o 00:03:22.967 CC lib/util/crc64.o 00:03:22.967 CC lib/util/dif.o 00:03:22.967 CC lib/util/fd.o 00:03:22.967 CC lib/util/fd_group.o 00:03:22.967 CC lib/util/file.o 00:03:22.967 CC lib/util/hexlify.o 00:03:22.967 CC lib/util/iov.o 00:03:22.967 CC lib/util/math.o 00:03:22.967 CC lib/util/net.o 00:03:22.967 CC lib/util/pipe.o 00:03:22.967 CC lib/util/strerror_tls.o 00:03:22.967 CC lib/util/string.o 00:03:22.967 CC lib/util/uuid.o 00:03:22.967 CC lib/util/xor.o 00:03:22.967 CC lib/util/zipf.o 00:03:22.967 CC lib/util/md5.o 00:03:22.967 CC lib/vfio_user/host/vfio_user.o 00:03:22.967 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.967 LIB libspdk_dma.a 00:03:23.229 SO libspdk_dma.so.5.0 00:03:23.229 LIB libspdk_ioat.a 00:03:23.229 SYMLINK libspdk_dma.so 00:03:23.229 SO libspdk_ioat.so.7.0 00:03:23.229 SYMLINK libspdk_ioat.so 00:03:23.229 LIB libspdk_vfio_user.a 00:03:23.229 SO libspdk_vfio_user.so.5.0 00:03:23.489 SYMLINK libspdk_vfio_user.so 00:03:23.489 LIB libspdk_util.a 00:03:23.489 SO libspdk_util.so.10.1 00:03:23.750 SYMLINK libspdk_util.so 00:03:23.750 LIB libspdk_trace_parser.a 00:03:23.750 SO libspdk_trace_parser.so.6.0 00:03:23.750 SYMLINK libspdk_trace_parser.so 00:03:24.010 CC lib/vmd/vmd.o 00:03:24.010 CC lib/vmd/led.o 00:03:24.010 CC lib/conf/conf.o 00:03:24.010 CC lib/idxd/idxd.o 00:03:24.010 CC lib/json/json_parse.o 00:03:24.010 CC lib/idxd/idxd_user.o 
00:03:24.010 CC lib/json/json_util.o 00:03:24.010 CC lib/idxd/idxd_kernel.o 00:03:24.010 CC lib/json/json_write.o 00:03:24.010 CC lib/rdma_utils/rdma_utils.o 00:03:24.010 CC lib/env_dpdk/env.o 00:03:24.010 CC lib/env_dpdk/memory.o 00:03:24.010 CC lib/env_dpdk/init.o 00:03:24.010 CC lib/env_dpdk/pci.o 00:03:24.010 CC lib/env_dpdk/threads.o 00:03:24.010 CC lib/env_dpdk/pci_ioat.o 00:03:24.010 CC lib/env_dpdk/pci_virtio.o 00:03:24.010 CC lib/env_dpdk/pci_vmd.o 00:03:24.010 CC lib/env_dpdk/pci_idxd.o 00:03:24.010 CC lib/env_dpdk/pci_event.o 00:03:24.010 CC lib/env_dpdk/sigbus_handler.o 00:03:24.010 CC lib/env_dpdk/pci_dpdk.o 00:03:24.010 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:24.010 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:24.270 LIB libspdk_conf.a 00:03:24.270 SO libspdk_conf.so.6.0 00:03:24.270 LIB libspdk_rdma_utils.a 00:03:24.270 LIB libspdk_json.a 00:03:24.270 SYMLINK libspdk_conf.so 00:03:24.270 SO libspdk_rdma_utils.so.1.0 00:03:24.270 SO libspdk_json.so.6.0 00:03:24.550 SYMLINK libspdk_rdma_utils.so 00:03:24.550 SYMLINK libspdk_json.so 00:03:24.550 LIB libspdk_idxd.a 00:03:24.550 LIB libspdk_vmd.a 00:03:24.550 SO libspdk_idxd.so.12.1 00:03:24.550 SO libspdk_vmd.so.6.0 00:03:24.810 SYMLINK libspdk_idxd.so 00:03:24.810 SYMLINK libspdk_vmd.so 00:03:24.810 CC lib/rdma_provider/common.o 00:03:24.810 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:24.810 CC lib/jsonrpc/jsonrpc_server.o 00:03:24.810 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:24.810 CC lib/jsonrpc/jsonrpc_client.o 00:03:24.810 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.071 LIB libspdk_rdma_provider.a 00:03:25.071 LIB libspdk_jsonrpc.a 00:03:25.071 SO libspdk_rdma_provider.so.7.0 00:03:25.071 SO libspdk_jsonrpc.so.6.0 00:03:25.071 SYMLINK libspdk_rdma_provider.so 00:03:25.071 SYMLINK libspdk_jsonrpc.so 00:03:25.331 LIB libspdk_env_dpdk.a 00:03:25.331 SO libspdk_env_dpdk.so.15.1 00:03:25.592 SYMLINK libspdk_env_dpdk.so 00:03:25.592 CC lib/rpc/rpc.o 00:03:25.853 LIB libspdk_rpc.a 00:03:25.853 SO 
libspdk_rpc.so.6.0 00:03:25.853 SYMLINK libspdk_rpc.so 00:03:26.114 CC lib/keyring/keyring.o 00:03:26.114 CC lib/keyring/keyring_rpc.o 00:03:26.114 CC lib/trace/trace.o 00:03:26.114 CC lib/trace/trace_flags.o 00:03:26.114 CC lib/trace/trace_rpc.o 00:03:26.114 CC lib/notify/notify.o 00:03:26.114 CC lib/notify/notify_rpc.o 00:03:26.376 LIB libspdk_notify.a 00:03:26.376 SO libspdk_notify.so.6.0 00:03:26.376 LIB libspdk_keyring.a 00:03:26.376 LIB libspdk_trace.a 00:03:26.376 SO libspdk_keyring.so.2.0 00:03:26.638 SYMLINK libspdk_notify.so 00:03:26.638 SO libspdk_trace.so.11.0 00:03:26.638 SYMLINK libspdk_keyring.so 00:03:26.638 SYMLINK libspdk_trace.so 00:03:26.899 CC lib/sock/sock.o 00:03:26.899 CC lib/sock/sock_rpc.o 00:03:26.899 CC lib/thread/thread.o 00:03:26.899 CC lib/thread/iobuf.o 00:03:27.470 LIB libspdk_sock.a 00:03:27.470 SO libspdk_sock.so.10.0 00:03:27.470 SYMLINK libspdk_sock.so 00:03:27.731 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.731 CC lib/nvme/nvme_ctrlr.o 00:03:27.731 CC lib/nvme/nvme_fabric.o 00:03:27.731 CC lib/nvme/nvme_ns_cmd.o 00:03:27.731 CC lib/nvme/nvme_ns.o 00:03:27.731 CC lib/nvme/nvme_pcie_common.o 00:03:27.731 CC lib/nvme/nvme_pcie.o 00:03:27.731 CC lib/nvme/nvme_qpair.o 00:03:27.731 CC lib/nvme/nvme.o 00:03:27.731 CC lib/nvme/nvme_quirks.o 00:03:27.731 CC lib/nvme/nvme_transport.o 00:03:27.731 CC lib/nvme/nvme_discovery.o 00:03:27.731 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:27.731 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:27.731 CC lib/nvme/nvme_tcp.o 00:03:27.731 CC lib/nvme/nvme_opal.o 00:03:27.731 CC lib/nvme/nvme_io_msg.o 00:03:27.731 CC lib/nvme/nvme_poll_group.o 00:03:27.731 CC lib/nvme/nvme_zns.o 00:03:27.731 CC lib/nvme/nvme_stubs.o 00:03:27.731 CC lib/nvme/nvme_auth.o 00:03:27.731 CC lib/nvme/nvme_cuse.o 00:03:27.731 CC lib/nvme/nvme_vfio_user.o 00:03:27.731 CC lib/nvme/nvme_rdma.o 00:03:28.300 LIB libspdk_thread.a 00:03:28.300 SO libspdk_thread.so.11.0 00:03:28.300 SYMLINK libspdk_thread.so 00:03:28.874 CC lib/blob/blobstore.o 
00:03:28.874 CC lib/blob/zeroes.o 00:03:28.874 CC lib/blob/request.o 00:03:28.874 CC lib/blob/blob_bs_dev.o 00:03:28.874 CC lib/virtio/virtio.o 00:03:28.874 CC lib/accel/accel.o 00:03:28.874 CC lib/virtio/virtio_vhost_user.o 00:03:28.874 CC lib/accel/accel_rpc.o 00:03:28.874 CC lib/virtio/virtio_vfio_user.o 00:03:28.874 CC lib/accel/accel_sw.o 00:03:28.874 CC lib/fsdev/fsdev.o 00:03:28.874 CC lib/virtio/virtio_pci.o 00:03:28.874 CC lib/fsdev/fsdev_io.o 00:03:28.874 CC lib/fsdev/fsdev_rpc.o 00:03:28.874 CC lib/vfu_tgt/tgt_endpoint.o 00:03:28.874 CC lib/vfu_tgt/tgt_rpc.o 00:03:28.874 CC lib/init/json_config.o 00:03:28.874 CC lib/init/rpc.o 00:03:28.874 CC lib/init/subsystem.o 00:03:28.874 CC lib/init/subsystem_rpc.o 00:03:28.874 LIB libspdk_init.a 00:03:29.135 SO libspdk_init.so.6.0 00:03:29.135 LIB libspdk_virtio.a 00:03:29.135 LIB libspdk_vfu_tgt.a 00:03:29.135 SO libspdk_virtio.so.7.0 00:03:29.135 SO libspdk_vfu_tgt.so.3.0 00:03:29.135 SYMLINK libspdk_init.so 00:03:29.135 SYMLINK libspdk_virtio.so 00:03:29.135 SYMLINK libspdk_vfu_tgt.so 00:03:29.398 LIB libspdk_fsdev.a 00:03:29.398 SO libspdk_fsdev.so.2.0 00:03:29.398 SYMLINK libspdk_fsdev.so 00:03:29.398 CC lib/event/app.o 00:03:29.398 CC lib/event/reactor.o 00:03:29.398 CC lib/event/log_rpc.o 00:03:29.398 CC lib/event/app_rpc.o 00:03:29.398 CC lib/event/scheduler_static.o 00:03:29.660 LIB libspdk_nvme.a 00:03:29.660 LIB libspdk_accel.a 00:03:29.660 SO libspdk_accel.so.16.0 00:03:29.660 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:29.922 SYMLINK libspdk_accel.so 00:03:29.922 SO libspdk_nvme.so.15.0 00:03:29.922 LIB libspdk_event.a 00:03:29.922 SO libspdk_event.so.14.0 00:03:29.922 SYMLINK libspdk_event.so 00:03:30.185 SYMLINK libspdk_nvme.so 00:03:30.185 CC lib/bdev/bdev.o 00:03:30.185 CC lib/bdev/bdev_rpc.o 00:03:30.185 CC lib/bdev/bdev_zone.o 00:03:30.185 CC lib/bdev/part.o 00:03:30.185 CC lib/bdev/scsi_nvme.o 00:03:30.446 LIB libspdk_fuse_dispatcher.a 00:03:30.446 SO libspdk_fuse_dispatcher.so.1.0 
00:03:30.446 SYMLINK libspdk_fuse_dispatcher.so 00:03:31.393 LIB libspdk_blob.a 00:03:31.393 SO libspdk_blob.so.12.0 00:03:31.393 SYMLINK libspdk_blob.so 00:03:31.655 CC lib/blobfs/blobfs.o 00:03:31.655 CC lib/blobfs/tree.o 00:03:31.655 CC lib/lvol/lvol.o 00:03:32.601 LIB libspdk_blobfs.a 00:03:32.601 SO libspdk_blobfs.so.11.0 00:03:32.601 LIB libspdk_bdev.a 00:03:32.601 LIB libspdk_lvol.a 00:03:32.601 SO libspdk_bdev.so.17.0 00:03:32.601 SYMLINK libspdk_blobfs.so 00:03:32.601 SO libspdk_lvol.so.11.0 00:03:32.601 SYMLINK libspdk_bdev.so 00:03:32.601 SYMLINK libspdk_lvol.so 00:03:33.188 CC lib/ftl/ftl_core.o 00:03:33.188 CC lib/ftl/ftl_init.o 00:03:33.188 CC lib/ftl/ftl_layout.o 00:03:33.188 CC lib/nvmf/ctrlr.o 00:03:33.188 CC lib/ftl/ftl_debug.o 00:03:33.188 CC lib/nvmf/ctrlr_discovery.o 00:03:33.188 CC lib/ftl/ftl_io.o 00:03:33.188 CC lib/nvmf/ctrlr_bdev.o 00:03:33.188 CC lib/ftl/ftl_sb.o 00:03:33.188 CC lib/nvmf/subsystem.o 00:03:33.188 CC lib/ftl/ftl_l2p.o 00:03:33.188 CC lib/nvmf/nvmf.o 00:03:33.188 CC lib/nvmf/nvmf_rpc.o 00:03:33.188 CC lib/ftl/ftl_l2p_flat.o 00:03:33.188 CC lib/nvmf/transport.o 00:03:33.188 CC lib/ftl/ftl_nv_cache.o 00:03:33.188 CC lib/nvmf/tcp.o 00:03:33.188 CC lib/ftl/ftl_band.o 00:03:33.188 CC lib/nbd/nbd.o 00:03:33.188 CC lib/ftl/ftl_band_ops.o 00:03:33.188 CC lib/nvmf/stubs.o 00:03:33.188 CC lib/scsi/dev.o 00:03:33.188 CC lib/ftl/ftl_writer.o 00:03:33.188 CC lib/nvmf/mdns_server.o 00:03:33.188 CC lib/scsi/lun.o 00:03:33.188 CC lib/nvmf/vfio_user.o 00:03:33.188 CC lib/scsi/port.o 00:03:33.188 CC lib/ftl/ftl_rq.o 00:03:33.188 CC lib/nvmf/rdma.o 00:03:33.188 CC lib/nbd/nbd_rpc.o 00:03:33.188 CC lib/ublk/ublk.o 00:03:33.188 CC lib/scsi/scsi.o 00:03:33.188 CC lib/ublk/ublk_rpc.o 00:03:33.188 CC lib/ftl/ftl_reloc.o 00:03:33.188 CC lib/nvmf/auth.o 00:03:33.188 CC lib/ftl/ftl_l2p_cache.o 00:03:33.188 CC lib/scsi/scsi_bdev.o 00:03:33.188 CC lib/ftl/ftl_p2l.o 00:03:33.188 CC lib/scsi/scsi_pr.o 00:03:33.188 CC lib/ftl/ftl_p2l_log.o 00:03:33.188 CC 
lib/scsi/task.o 00:03:33.188 CC lib/scsi/scsi_rpc.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:33.188 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:33.188 CC lib/ftl/utils/ftl_conf.o 00:03:33.188 CC lib/ftl/utils/ftl_md.o 00:03:33.188 CC lib/ftl/utils/ftl_mempool.o 00:03:33.188 CC lib/ftl/utils/ftl_bitmap.o 00:03:33.188 CC lib/ftl/utils/ftl_property.o 00:03:33.188 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:33.188 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:33.188 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:33.188 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:33.188 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:33.188 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:33.188 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:33.188 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:33.188 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:33.188 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:33.188 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:33.188 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:33.188 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:33.188 CC lib/ftl/base/ftl_base_dev.o 00:03:33.188 CC lib/ftl/ftl_trace.o 00:03:33.188 CC lib/ftl/base/ftl_base_bdev.o 00:03:33.448 LIB libspdk_nbd.a 00:03:33.709 SO libspdk_nbd.so.7.0 00:03:33.709 LIB libspdk_scsi.a 00:03:33.709 SYMLINK libspdk_nbd.so 00:03:33.709 SO libspdk_scsi.so.9.0 00:03:33.709 LIB libspdk_ublk.a 00:03:33.709 SYMLINK libspdk_scsi.so 00:03:33.709 SO libspdk_ublk.so.3.0 00:03:33.970 SYMLINK libspdk_ublk.so 00:03:33.970 LIB libspdk_ftl.a 00:03:34.229 CC 
lib/vhost/vhost.o 00:03:34.229 CC lib/vhost/vhost_rpc.o 00:03:34.229 CC lib/vhost/vhost_scsi.o 00:03:34.229 CC lib/vhost/vhost_blk.o 00:03:34.229 CC lib/vhost/rte_vhost_user.o 00:03:34.229 CC lib/iscsi/init_grp.o 00:03:34.229 CC lib/iscsi/conn.o 00:03:34.229 CC lib/iscsi/iscsi.o 00:03:34.229 CC lib/iscsi/param.o 00:03:34.229 CC lib/iscsi/portal_grp.o 00:03:34.229 CC lib/iscsi/tgt_node.o 00:03:34.229 CC lib/iscsi/iscsi_subsystem.o 00:03:34.229 CC lib/iscsi/iscsi_rpc.o 00:03:34.229 CC lib/iscsi/task.o 00:03:34.229 SO libspdk_ftl.so.9.0 00:03:34.488 SYMLINK libspdk_ftl.so 00:03:35.058 LIB libspdk_nvmf.a 00:03:35.058 SO libspdk_nvmf.so.20.0 00:03:35.058 LIB libspdk_vhost.a 00:03:35.058 SO libspdk_vhost.so.8.0 00:03:35.318 SYMLINK libspdk_nvmf.so 00:03:35.318 SYMLINK libspdk_vhost.so 00:03:35.318 LIB libspdk_iscsi.a 00:03:35.318 SO libspdk_iscsi.so.8.0 00:03:35.579 SYMLINK libspdk_iscsi.so 00:03:36.150 CC module/vfu_device/vfu_virtio.o 00:03:36.150 CC module/vfu_device/vfu_virtio_rpc.o 00:03:36.150 CC module/vfu_device/vfu_virtio_blk.o 00:03:36.150 CC module/vfu_device/vfu_virtio_scsi.o 00:03:36.150 CC module/env_dpdk/env_dpdk_rpc.o 00:03:36.150 CC module/vfu_device/vfu_virtio_fs.o 00:03:36.150 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.150 CC module/accel/dsa/accel_dsa.o 00:03:36.150 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.150 CC module/accel/iaa/accel_iaa.o 00:03:36.150 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.150 CC module/sock/posix/posix.o 00:03:36.150 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.150 CC module/blob/bdev/blob_bdev.o 00:03:36.150 LIB libspdk_env_dpdk_rpc.a 00:03:36.150 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:36.150 CC module/keyring/file/keyring.o 00:03:36.150 CC module/keyring/linux/keyring_rpc.o 00:03:36.150 CC module/keyring/linux/keyring.o 00:03:36.150 CC module/fsdev/aio/fsdev_aio.o 00:03:36.150 CC module/accel/ioat/accel_ioat.o 00:03:36.150 CC module/keyring/file/keyring_rpc.o 00:03:36.150 CC 
module/accel/error/accel_error.o 00:03:36.150 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:36.150 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.150 CC module/fsdev/aio/linux_aio_mgr.o 00:03:36.150 CC module/accel/error/accel_error_rpc.o 00:03:36.411 SO libspdk_env_dpdk_rpc.so.6.0 00:03:36.411 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.411 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.411 LIB libspdk_keyring_linux.a 00:03:36.411 LIB libspdk_keyring_file.a 00:03:36.411 LIB libspdk_scheduler_gscheduler.a 00:03:36.411 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.411 SO libspdk_keyring_linux.so.1.0 00:03:36.411 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.411 LIB libspdk_accel_ioat.a 00:03:36.411 SO libspdk_keyring_file.so.2.0 00:03:36.411 LIB libspdk_scheduler_dynamic.a 00:03:36.411 LIB libspdk_accel_iaa.a 00:03:36.411 LIB libspdk_accel_error.a 00:03:36.411 SO libspdk_accel_ioat.so.6.0 00:03:36.411 SO libspdk_accel_iaa.so.3.0 00:03:36.411 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.411 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.411 SYMLINK libspdk_keyring_linux.so 00:03:36.411 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.411 LIB libspdk_blob_bdev.a 00:03:36.411 SO libspdk_accel_error.so.2.0 00:03:36.411 SYMLINK libspdk_keyring_file.so 00:03:36.672 LIB libspdk_accel_dsa.a 00:03:36.672 SO libspdk_blob_bdev.so.12.0 00:03:36.672 SYMLINK libspdk_accel_ioat.so 00:03:36.672 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.672 SYMLINK libspdk_accel_iaa.so 00:03:36.672 SYMLINK libspdk_accel_error.so 00:03:36.672 SO libspdk_accel_dsa.so.5.0 00:03:36.672 SYMLINK libspdk_blob_bdev.so 00:03:36.672 SYMLINK libspdk_accel_dsa.so 00:03:36.672 LIB libspdk_vfu_device.a 00:03:36.672 SO libspdk_vfu_device.so.3.0 00:03:36.934 SYMLINK libspdk_vfu_device.so 00:03:36.934 LIB libspdk_fsdev_aio.a 00:03:36.934 LIB libspdk_sock_posix.a 00:03:36.934 SO libspdk_fsdev_aio.so.1.0 00:03:36.934 SO libspdk_sock_posix.so.6.0 00:03:36.934 SYMLINK libspdk_fsdev_aio.so 00:03:37.194 SYMLINK 
libspdk_sock_posix.so 00:03:37.194 CC module/bdev/error/vbdev_error.o 00:03:37.194 CC module/bdev/error/vbdev_error_rpc.o 00:03:37.194 CC module/bdev/lvol/vbdev_lvol.o 00:03:37.194 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:37.194 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:37.194 CC module/bdev/malloc/bdev_malloc.o 00:03:37.194 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:37.194 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:37.194 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:37.194 CC module/bdev/gpt/gpt.o 00:03:37.194 CC module/bdev/gpt/vbdev_gpt.o 00:03:37.194 CC module/bdev/delay/vbdev_delay.o 00:03:37.194 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:37.194 CC module/bdev/raid/bdev_raid.o 00:03:37.194 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.194 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.194 CC module/bdev/raid/raid0.o 00:03:37.194 CC module/bdev/nvme/bdev_nvme.o 00:03:37.194 CC module/bdev/raid/raid1.o 00:03:37.194 CC module/bdev/raid/concat.o 00:03:37.194 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:37.194 CC module/bdev/nvme/nvme_rpc.o 00:03:37.194 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.194 CC module/bdev/ftl/bdev_ftl.o 00:03:37.194 CC module/bdev/iscsi/bdev_iscsi.o 00:03:37.194 CC module/blobfs/bdev/blobfs_bdev.o 00:03:37.194 CC module/bdev/nvme/bdev_mdns_client.o 00:03:37.194 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.195 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:37.195 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:37.195 CC module/bdev/nvme/vbdev_opal.o 00:03:37.195 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.195 CC module/bdev/split/vbdev_split.o 00:03:37.195 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:37.195 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:37.195 CC module/bdev/aio/bdev_aio.o 00:03:37.195 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:37.195 CC module/bdev/aio/bdev_aio_rpc.o 00:03:37.195 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:37.195 CC module/bdev/null/bdev_null.o 00:03:37.195 CC 
module/bdev/split/vbdev_split_rpc.o 00:03:37.195 CC module/bdev/null/bdev_null_rpc.o 00:03:37.455 LIB libspdk_blobfs_bdev.a 00:03:37.455 LIB libspdk_bdev_split.a 00:03:37.455 SO libspdk_blobfs_bdev.so.6.0 00:03:37.455 LIB libspdk_bdev_error.a 00:03:37.455 LIB libspdk_bdev_gpt.a 00:03:37.455 SO libspdk_bdev_split.so.6.0 00:03:37.455 LIB libspdk_bdev_null.a 00:03:37.455 SO libspdk_bdev_error.so.6.0 00:03:37.455 SO libspdk_bdev_gpt.so.6.0 00:03:37.455 LIB libspdk_bdev_ftl.a 00:03:37.716 SO libspdk_bdev_null.so.6.0 00:03:37.716 SYMLINK libspdk_blobfs_bdev.so 00:03:37.716 LIB libspdk_bdev_passthru.a 00:03:37.716 SYMLINK libspdk_bdev_split.so 00:03:37.716 LIB libspdk_bdev_zone_block.a 00:03:37.716 LIB libspdk_bdev_malloc.a 00:03:37.716 LIB libspdk_bdev_delay.a 00:03:37.716 LIB libspdk_bdev_aio.a 00:03:37.716 SO libspdk_bdev_ftl.so.6.0 00:03:37.716 SYMLINK libspdk_bdev_gpt.so 00:03:37.716 SYMLINK libspdk_bdev_error.so 00:03:37.716 SO libspdk_bdev_zone_block.so.6.0 00:03:37.716 SO libspdk_bdev_malloc.so.6.0 00:03:37.716 SO libspdk_bdev_passthru.so.6.0 00:03:37.716 LIB libspdk_bdev_iscsi.a 00:03:37.716 SO libspdk_bdev_delay.so.6.0 00:03:37.716 SYMLINK libspdk_bdev_null.so 00:03:37.716 SO libspdk_bdev_aio.so.6.0 00:03:37.716 SO libspdk_bdev_iscsi.so.6.0 00:03:37.716 SYMLINK libspdk_bdev_ftl.so 00:03:37.716 SYMLINK libspdk_bdev_zone_block.so 00:03:37.716 SYMLINK libspdk_bdev_passthru.so 00:03:37.716 SYMLINK libspdk_bdev_malloc.so 00:03:37.716 SYMLINK libspdk_bdev_aio.so 00:03:37.716 LIB libspdk_bdev_lvol.a 00:03:37.716 SYMLINK libspdk_bdev_iscsi.so 00:03:37.716 SYMLINK libspdk_bdev_delay.so 00:03:37.716 SO libspdk_bdev_lvol.so.6.0 00:03:37.716 LIB libspdk_bdev_virtio.a 00:03:37.716 SO libspdk_bdev_virtio.so.6.0 00:03:37.977 SYMLINK libspdk_bdev_lvol.so 00:03:37.977 SYMLINK libspdk_bdev_virtio.so 00:03:38.239 LIB libspdk_bdev_raid.a 00:03:38.239 SO libspdk_bdev_raid.so.6.0 00:03:38.239 SYMLINK libspdk_bdev_raid.so 00:03:39.646 LIB libspdk_bdev_nvme.a 00:03:39.646 SO 
libspdk_bdev_nvme.so.7.1 00:03:39.646 SYMLINK libspdk_bdev_nvme.so 00:03:40.590 CC module/event/subsystems/iobuf/iobuf.o 00:03:40.590 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:40.590 CC module/event/subsystems/scheduler/scheduler.o 00:03:40.590 CC module/event/subsystems/fsdev/fsdev.o 00:03:40.590 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:40.590 CC module/event/subsystems/sock/sock.o 00:03:40.590 CC module/event/subsystems/keyring/keyring.o 00:03:40.590 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:40.590 CC module/event/subsystems/vmd/vmd.o 00:03:40.590 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:40.590 LIB libspdk_event_iobuf.a 00:03:40.590 LIB libspdk_event_scheduler.a 00:03:40.590 LIB libspdk_event_keyring.a 00:03:40.590 LIB libspdk_event_vhost_blk.a 00:03:40.590 LIB libspdk_event_fsdev.a 00:03:40.590 LIB libspdk_event_sock.a 00:03:40.590 LIB libspdk_event_vmd.a 00:03:40.590 LIB libspdk_event_vfu_tgt.a 00:03:40.590 SO libspdk_event_keyring.so.1.0 00:03:40.590 SO libspdk_event_scheduler.so.4.0 00:03:40.590 SO libspdk_event_iobuf.so.3.0 00:03:40.590 SO libspdk_event_vhost_blk.so.3.0 00:03:40.590 SO libspdk_event_fsdev.so.1.0 00:03:40.590 SO libspdk_event_sock.so.5.0 00:03:40.590 SO libspdk_event_vmd.so.6.0 00:03:40.590 SO libspdk_event_vfu_tgt.so.3.0 00:03:40.590 SYMLINK libspdk_event_keyring.so 00:03:40.590 SYMLINK libspdk_event_scheduler.so 00:03:40.590 SYMLINK libspdk_event_iobuf.so 00:03:40.852 SYMLINK libspdk_event_vhost_blk.so 00:03:40.852 SYMLINK libspdk_event_fsdev.so 00:03:40.852 SYMLINK libspdk_event_sock.so 00:03:40.852 SYMLINK libspdk_event_vfu_tgt.so 00:03:40.852 SYMLINK libspdk_event_vmd.so 00:03:41.113 CC module/event/subsystems/accel/accel.o 00:03:41.113 LIB libspdk_event_accel.a 00:03:41.374 SO libspdk_event_accel.so.6.0 00:03:41.374 SYMLINK libspdk_event_accel.so 00:03:41.635 CC module/event/subsystems/bdev/bdev.o 00:03:41.895 LIB libspdk_event_bdev.a 00:03:41.895 SO libspdk_event_bdev.so.6.0 00:03:41.895 SYMLINK 
libspdk_event_bdev.so 00:03:42.156 CC module/event/subsystems/nbd/nbd.o 00:03:42.418 CC module/event/subsystems/scsi/scsi.o 00:03:42.418 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.418 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.418 CC module/event/subsystems/ublk/ublk.o 00:03:42.418 LIB libspdk_event_nbd.a 00:03:42.418 LIB libspdk_event_ublk.a 00:03:42.418 SO libspdk_event_nbd.so.6.0 00:03:42.418 LIB libspdk_event_scsi.a 00:03:42.418 SO libspdk_event_ublk.so.3.0 00:03:42.418 SO libspdk_event_scsi.so.6.0 00:03:42.418 SYMLINK libspdk_event_nbd.so 00:03:42.687 LIB libspdk_event_nvmf.a 00:03:42.687 SO libspdk_event_nvmf.so.6.0 00:03:42.687 SYMLINK libspdk_event_ublk.so 00:03:42.687 SYMLINK libspdk_event_scsi.so 00:03:42.687 SYMLINK libspdk_event_nvmf.so 00:03:43.036 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.036 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.037 LIB libspdk_event_vhost_scsi.a 00:03:43.037 LIB libspdk_event_iscsi.a 00:03:43.347 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.347 SO libspdk_event_iscsi.so.6.0 00:03:43.347 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.347 SYMLINK libspdk_event_iscsi.so 00:03:43.347 SO libspdk.so.6.0 00:03:43.347 SYMLINK libspdk.so 00:03:43.919 CC app/spdk_top/spdk_top.o 00:03:43.919 CXX app/trace/trace.o 00:03:43.919 CC app/spdk_lspci/spdk_lspci.o 00:03:43.919 CC app/spdk_nvme_discover/discovery_aer.o 00:03:43.919 CC app/spdk_nvme_perf/perf.o 00:03:43.919 CC app/trace_record/trace_record.o 00:03:43.919 CC app/spdk_nvme_identify/identify.o 00:03:43.919 TEST_HEADER include/spdk/accel.h 00:03:43.919 CC test/rpc_client/rpc_client_test.o 00:03:43.919 TEST_HEADER include/spdk/accel_module.h 00:03:43.919 TEST_HEADER include/spdk/barrier.h 00:03:43.919 TEST_HEADER include/spdk/assert.h 00:03:43.919 TEST_HEADER include/spdk/base64.h 00:03:43.919 TEST_HEADER include/spdk/bdev.h 00:03:43.919 TEST_HEADER include/spdk/bdev_module.h 00:03:43.919 TEST_HEADER include/spdk/bdev_zone.h 00:03:43.919 
TEST_HEADER include/spdk/bit_array.h 00:03:43.919 TEST_HEADER include/spdk/bit_pool.h 00:03:43.919 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.919 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.919 TEST_HEADER include/spdk/blobfs.h 00:03:43.919 TEST_HEADER include/spdk/blob.h 00:03:43.919 TEST_HEADER include/spdk/conf.h 00:03:43.919 TEST_HEADER include/spdk/config.h 00:03:43.919 TEST_HEADER include/spdk/cpuset.h 00:03:43.919 TEST_HEADER include/spdk/crc16.h 00:03:43.919 TEST_HEADER include/spdk/crc32.h 00:03:43.919 TEST_HEADER include/spdk/dif.h 00:03:43.919 TEST_HEADER include/spdk/crc64.h 00:03:43.919 TEST_HEADER include/spdk/endian.h 00:03:43.919 TEST_HEADER include/spdk/dma.h 00:03:43.919 CC app/spdk_dd/spdk_dd.o 00:03:43.919 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.919 TEST_HEADER include/spdk/env.h 00:03:43.919 TEST_HEADER include/spdk/event.h 00:03:43.919 TEST_HEADER include/spdk/fd_group.h 00:03:43.919 TEST_HEADER include/spdk/fd.h 00:03:43.919 TEST_HEADER include/spdk/file.h 00:03:43.919 TEST_HEADER include/spdk/fsdev.h 00:03:43.919 TEST_HEADER include/spdk/ftl.h 00:03:43.919 TEST_HEADER include/spdk/fsdev_module.h 00:03:43.919 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:43.919 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.919 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:43.919 TEST_HEADER include/spdk/gpt_spec.h 00:03:43.919 TEST_HEADER include/spdk/hexlify.h 00:03:43.919 CC app/nvmf_tgt/nvmf_main.o 00:03:43.919 TEST_HEADER include/spdk/histogram_data.h 00:03:43.919 TEST_HEADER include/spdk/idxd.h 00:03:43.919 TEST_HEADER include/spdk/idxd_spec.h 00:03:43.919 CC app/spdk_tgt/spdk_tgt.o 00:03:43.919 TEST_HEADER include/spdk/init.h 00:03:43.919 TEST_HEADER include/spdk/ioat.h 00:03:43.919 TEST_HEADER include/spdk/ioat_spec.h 00:03:43.919 TEST_HEADER include/spdk/iscsi_spec.h 00:03:43.919 TEST_HEADER include/spdk/json.h 00:03:43.919 TEST_HEADER include/spdk/jsonrpc.h 00:03:43.919 TEST_HEADER include/spdk/keyring.h 00:03:43.919 TEST_HEADER 
include/spdk/keyring_module.h 00:03:43.919 TEST_HEADER include/spdk/likely.h 00:03:43.919 TEST_HEADER include/spdk/lvol.h 00:03:43.919 TEST_HEADER include/spdk/log.h 00:03:43.919 TEST_HEADER include/spdk/md5.h 00:03:43.919 TEST_HEADER include/spdk/memory.h 00:03:43.919 TEST_HEADER include/spdk/mmio.h 00:03:43.919 TEST_HEADER include/spdk/nbd.h 00:03:43.919 TEST_HEADER include/spdk/net.h 00:03:43.919 TEST_HEADER include/spdk/nvme.h 00:03:43.919 TEST_HEADER include/spdk/notify.h 00:03:43.919 TEST_HEADER include/spdk/nvme_intel.h 00:03:43.919 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:43.919 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:43.919 TEST_HEADER include/spdk/nvme_spec.h 00:03:43.919 TEST_HEADER include/spdk/nvme_zns.h 00:03:43.919 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:43.919 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:43.919 TEST_HEADER include/spdk/nvmf.h 00:03:43.919 TEST_HEADER include/spdk/nvmf_spec.h 00:03:43.919 TEST_HEADER include/spdk/nvmf_transport.h 00:03:43.919 TEST_HEADER include/spdk/opal.h 00:03:43.919 TEST_HEADER include/spdk/opal_spec.h 00:03:43.919 TEST_HEADER include/spdk/pci_ids.h 00:03:43.919 TEST_HEADER include/spdk/pipe.h 00:03:43.919 TEST_HEADER include/spdk/queue.h 00:03:43.919 TEST_HEADER include/spdk/reduce.h 00:03:43.919 TEST_HEADER include/spdk/rpc.h 00:03:43.919 TEST_HEADER include/spdk/scheduler.h 00:03:43.919 TEST_HEADER include/spdk/scsi.h 00:03:43.919 TEST_HEADER include/spdk/scsi_spec.h 00:03:43.919 TEST_HEADER include/spdk/sock.h 00:03:43.919 TEST_HEADER include/spdk/stdinc.h 00:03:43.919 TEST_HEADER include/spdk/string.h 00:03:43.919 TEST_HEADER include/spdk/thread.h 00:03:43.919 TEST_HEADER include/spdk/trace.h 00:03:43.919 TEST_HEADER include/spdk/trace_parser.h 00:03:43.919 TEST_HEADER include/spdk/tree.h 00:03:43.919 TEST_HEADER include/spdk/ublk.h 00:03:43.919 TEST_HEADER include/spdk/util.h 00:03:43.919 TEST_HEADER include/spdk/uuid.h 00:03:43.919 TEST_HEADER include/spdk/version.h 00:03:43.919 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:43.919 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:43.919 TEST_HEADER include/spdk/vhost.h 00:03:43.919 TEST_HEADER include/spdk/vmd.h 00:03:43.919 TEST_HEADER include/spdk/xor.h 00:03:43.919 TEST_HEADER include/spdk/zipf.h 00:03:43.919 CXX test/cpp_headers/accel.o 00:03:43.919 CXX test/cpp_headers/accel_module.o 00:03:43.919 CXX test/cpp_headers/barrier.o 00:03:43.919 CXX test/cpp_headers/assert.o 00:03:43.919 CXX test/cpp_headers/base64.o 00:03:43.919 CXX test/cpp_headers/bdev_module.o 00:03:43.919 CXX test/cpp_headers/bdev.o 00:03:43.919 CXX test/cpp_headers/bdev_zone.o 00:03:43.919 CXX test/cpp_headers/bit_array.o 00:03:43.919 CXX test/cpp_headers/bit_pool.o 00:03:43.919 CXX test/cpp_headers/blob_bdev.o 00:03:43.919 CXX test/cpp_headers/blobfs_bdev.o 00:03:43.919 CXX test/cpp_headers/blob.o 00:03:43.919 CXX test/cpp_headers/blobfs.o 00:03:43.919 CXX test/cpp_headers/conf.o 00:03:43.919 CXX test/cpp_headers/config.o 00:03:43.919 CXX test/cpp_headers/cpuset.o 00:03:43.919 CXX test/cpp_headers/crc16.o 00:03:43.919 CXX test/cpp_headers/crc64.o 00:03:43.919 CXX test/cpp_headers/crc32.o 00:03:43.919 CXX test/cpp_headers/dma.o 00:03:43.919 CXX test/cpp_headers/dif.o 00:03:43.919 CXX test/cpp_headers/endian.o 00:03:43.919 CXX test/cpp_headers/env_dpdk.o 00:03:43.919 CXX test/cpp_headers/env.o 00:03:43.920 CXX test/cpp_headers/event.o 00:03:43.920 CXX test/cpp_headers/fd.o 00:03:43.920 CXX test/cpp_headers/fd_group.o 00:03:43.920 CXX test/cpp_headers/file.o 00:03:43.920 CXX test/cpp_headers/fsdev_module.o 00:03:43.920 CXX test/cpp_headers/fsdev.o 00:03:43.920 CXX test/cpp_headers/ftl.o 00:03:43.920 CXX test/cpp_headers/hexlify.o 00:03:43.920 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.920 CXX test/cpp_headers/gpt_spec.o 00:03:43.920 CXX test/cpp_headers/histogram_data.o 00:03:43.920 CXX test/cpp_headers/idxd.o 00:03:43.920 CXX test/cpp_headers/init.o 00:03:43.920 CXX test/cpp_headers/idxd_spec.o 00:03:43.920 CXX 
test/cpp_headers/ioat.o 00:03:43.920 CXX test/cpp_headers/iscsi_spec.o 00:03:43.920 CXX test/cpp_headers/ioat_spec.o 00:03:43.920 CXX test/cpp_headers/json.o 00:03:43.920 CXX test/cpp_headers/keyring_module.o 00:03:43.920 CXX test/cpp_headers/jsonrpc.o 00:03:43.920 CXX test/cpp_headers/lvol.o 00:03:43.920 CXX test/cpp_headers/log.o 00:03:43.920 CXX test/cpp_headers/keyring.o 00:03:43.920 CXX test/cpp_headers/likely.o 00:03:43.920 CXX test/cpp_headers/md5.o 00:03:44.179 LINK spdk_lspci 00:03:44.179 CC test/thread/poller_perf/poller_perf.o 00:03:44.179 CXX test/cpp_headers/nbd.o 00:03:44.179 CXX test/cpp_headers/memory.o 00:03:44.179 CXX test/cpp_headers/net.o 00:03:44.179 CXX test/cpp_headers/mmio.o 00:03:44.179 CXX test/cpp_headers/nvme.o 00:03:44.179 CXX test/cpp_headers/nvme_intel.o 00:03:44.179 CXX test/cpp_headers/notify.o 00:03:44.179 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.179 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.179 CXX test/cpp_headers/nvme_spec.o 00:03:44.179 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.179 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.179 CXX test/cpp_headers/nvme_zns.o 00:03:44.179 CC test/env/pci/pci_ut.o 00:03:44.179 CC test/env/vtophys/vtophys.o 00:03:44.179 CXX test/cpp_headers/opal_spec.o 00:03:44.179 CXX test/cpp_headers/nvmf_spec.o 00:03:44.179 CXX test/cpp_headers/nvmf.o 00:03:44.179 CXX test/cpp_headers/nvmf_transport.o 00:03:44.179 CXX test/cpp_headers/opal.o 00:03:44.179 CXX test/cpp_headers/pci_ids.o 00:03:44.179 CXX test/cpp_headers/pipe.o 00:03:44.179 CXX test/cpp_headers/queue.o 00:03:44.179 CXX test/cpp_headers/rpc.o 00:03:44.179 CXX test/cpp_headers/scsi_spec.o 00:03:44.179 CXX test/cpp_headers/scheduler.o 00:03:44.179 CC test/app/histogram_perf/histogram_perf.o 00:03:44.179 CXX test/cpp_headers/reduce.o 00:03:44.179 CXX test/cpp_headers/scsi.o 00:03:44.179 CC test/app/stub/stub.o 00:03:44.179 CXX test/cpp_headers/string.o 00:03:44.179 CXX test/cpp_headers/sock.o 00:03:44.179 CXX test/cpp_headers/stdinc.o 
00:03:44.179 CXX test/cpp_headers/thread.o 00:03:44.179 CXX test/cpp_headers/trace.o 00:03:44.179 CXX test/cpp_headers/trace_parser.o 00:03:44.179 CXX test/cpp_headers/tree.o 00:03:44.179 CXX test/cpp_headers/ublk.o 00:03:44.179 CC examples/ioat/perf/perf.o 00:03:44.179 CXX test/cpp_headers/util.o 00:03:44.179 CXX test/cpp_headers/uuid.o 00:03:44.179 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.179 CC app/fio/nvme/fio_plugin.o 00:03:44.179 CC test/app/jsoncat/jsoncat.o 00:03:44.179 CXX test/cpp_headers/version.o 00:03:44.179 CXX test/cpp_headers/vmd.o 00:03:44.179 CXX test/cpp_headers/vhost.o 00:03:44.179 CC examples/ioat/verify/verify.o 00:03:44.179 CXX test/cpp_headers/zipf.o 00:03:44.179 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.179 CXX test/cpp_headers/xor.o 00:03:44.179 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.179 CC test/env/memory/memory_ut.o 00:03:44.179 CC examples/util/zipf/zipf.o 00:03:44.179 CC app/fio/bdev/fio_plugin.o 00:03:44.179 CC test/dma/test_dma/test_dma.o 00:03:44.179 LINK rpc_client_test 00:03:44.179 CC test/app/bdev_svc/bdev_svc.o 00:03:44.179 LINK spdk_nvme_discover 00:03:44.179 LINK nvmf_tgt 00:03:44.179 LINK interrupt_tgt 00:03:44.179 LINK spdk_trace_record 00:03:44.440 LINK iscsi_tgt 00:03:44.440 LINK spdk_tgt 00:03:44.440 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.440 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.440 LINK spdk_dd 00:03:44.440 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.440 LINK jsoncat 00:03:44.440 LINK poller_perf 00:03:44.698 LINK spdk_trace 00:03:44.698 LINK vtophys 00:03:44.698 LINK histogram_perf 00:03:44.698 LINK stub 00:03:44.698 LINK bdev_svc 00:03:44.698 LINK zipf 00:03:45.000 LINK verify 00:03:45.000 LINK env_dpdk_post_init 00:03:45.000 LINK ioat_perf 00:03:45.000 LINK pci_ut 00:03:45.000 CC app/vhost/vhost.o 00:03:45.000 CC test/event/event_perf/event_perf.o 
00:03:45.000 CC test/event/reactor_perf/reactor_perf.o 00:03:45.000 CC test/event/reactor/reactor.o 00:03:45.000 CC test/event/app_repeat/app_repeat.o 00:03:45.000 LINK vhost_fuzz 00:03:45.000 CC test/event/scheduler/scheduler.o 00:03:45.000 LINK nvme_fuzz 00:03:45.259 LINK spdk_bdev 00:03:45.259 LINK spdk_nvme 00:03:45.259 LINK spdk_top 00:03:45.259 LINK event_perf 00:03:45.259 LINK reactor_perf 00:03:45.259 LINK reactor 00:03:45.259 LINK vhost 00:03:45.259 LINK test_dma 00:03:45.259 LINK app_repeat 00:03:45.260 LINK spdk_nvme_identify 00:03:45.260 CC examples/vmd/led/led.o 00:03:45.260 LINK mem_callbacks 00:03:45.260 LINK spdk_nvme_perf 00:03:45.260 CC examples/idxd/perf/perf.o 00:03:45.260 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.260 CC examples/sock/hello_world/hello_sock.o 00:03:45.260 LINK scheduler 00:03:45.260 CC examples/thread/thread/thread_ex.o 00:03:45.520 LINK led 00:03:45.520 LINK lsvmd 00:03:45.520 LINK memory_ut 00:03:45.520 LINK hello_sock 00:03:45.520 LINK idxd_perf 00:03:45.781 LINK thread 00:03:45.781 CC test/nvme/sgl/sgl.o 00:03:45.781 CC test/nvme/startup/startup.o 00:03:45.781 CC test/nvme/boot_partition/boot_partition.o 00:03:45.781 CC test/nvme/overhead/overhead.o 00:03:45.781 CC test/nvme/aer/aer.o 00:03:45.781 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.781 CC test/nvme/err_injection/err_injection.o 00:03:45.781 CC test/nvme/reset/reset.o 00:03:45.781 CC test/nvme/cuse/cuse.o 00:03:45.781 CC test/nvme/simple_copy/simple_copy.o 00:03:45.781 CC test/nvme/e2edp/nvme_dp.o 00:03:45.781 CC test/nvme/compliance/nvme_compliance.o 00:03:45.781 CC test/nvme/connect_stress/connect_stress.o 00:03:45.781 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.781 CC test/nvme/reserve/reserve.o 00:03:45.781 CC test/nvme/fdp/fdp.o 00:03:45.781 CC test/accel/dif/dif.o 00:03:45.781 CC test/blobfs/mkfs/mkfs.o 00:03:46.042 CC test/lvol/esnap/esnap.o 00:03:46.042 LINK boot_partition 00:03:46.042 LINK startup 00:03:46.042 LINK doorbell_aers 00:03:46.042 
LINK err_injection 00:03:46.042 LINK fused_ordering 00:03:46.042 LINK connect_stress 00:03:46.042 CC examples/nvme/hello_world/hello_world.o 00:03:46.042 LINK reserve 00:03:46.042 LINK sgl 00:03:46.042 CC examples/nvme/reconnect/reconnect.o 00:03:46.042 LINK simple_copy 00:03:46.042 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.042 CC examples/nvme/arbitration/arbitration.o 00:03:46.042 CC examples/nvme/abort/abort.o 00:03:46.042 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:46.042 LINK nvme_dp 00:03:46.042 LINK reset 00:03:46.042 LINK iscsi_fuzz 00:03:46.042 CC examples/nvme/hotplug/hotplug.o 00:03:46.042 LINK overhead 00:03:46.042 LINK aer 00:03:46.042 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.042 LINK mkfs 00:03:46.042 LINK nvme_compliance 00:03:46.304 LINK fdp 00:03:46.304 CC examples/accel/perf/accel_perf.o 00:03:46.304 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:46.304 CC examples/blob/hello_world/hello_blob.o 00:03:46.304 CC examples/blob/cli/blobcli.o 00:03:46.304 LINK cmb_copy 00:03:46.304 LINK hello_world 00:03:46.304 LINK pmr_persistence 00:03:46.304 LINK hotplug 00:03:46.304 LINK arbitration 00:03:46.564 LINK reconnect 00:03:46.564 LINK abort 00:03:46.564 LINK dif 00:03:46.564 LINK hello_blob 00:03:46.564 LINK hello_fsdev 00:03:46.564 LINK nvme_manage 00:03:46.564 LINK accel_perf 00:03:46.825 LINK blobcli 00:03:47.086 LINK cuse 00:03:47.086 CC test/bdev/bdevio/bdevio.o 00:03:47.347 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.347 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.347 LINK hello_bdev 00:03:47.608 LINK bdevio 00:03:48.181 LINK bdevperf 00:03:48.442 CC examples/nvmf/nvmf/nvmf.o 00:03:49.015 LINK nvmf 00:03:50.404 LINK esnap 00:03:50.665 00:03:50.665 real 0m54.287s 00:03:50.665 user 7m47.958s 00:03:50.665 sys 4m29.623s 00:03:50.665 20:56:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:50.665 20:56:52 make -- common/autotest_common.sh@10 -- $ set +x 00:03:50.665 
************************************ 00:03:50.665 END TEST make 00:03:50.665 ************************************ 00:03:50.925 20:56:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:50.925 20:56:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:50.925 20:56:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:50.925 20:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.925 20:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:50.925 20:56:52 -- pm/common@44 -- $ pid=1755303 00:03:50.925 20:56:52 -- pm/common@50 -- $ kill -TERM 1755303 00:03:50.925 20:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.925 20:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:50.925 20:56:52 -- pm/common@44 -- $ pid=1755304 00:03:50.925 20:56:52 -- pm/common@50 -- $ kill -TERM 1755304 00:03:50.925 20:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.925 20:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:50.926 20:56:52 -- pm/common@44 -- $ pid=1755306 00:03:50.926 20:56:52 -- pm/common@50 -- $ kill -TERM 1755306 00:03:50.926 20:56:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.926 20:56:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:50.926 20:56:52 -- pm/common@44 -- $ pid=1755331 00:03:50.926 20:56:52 -- pm/common@50 -- $ sudo -E kill -TERM 1755331 00:03:50.926 20:56:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:50.926 20:56:52 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 
00:03:50.926 20:56:52 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.926 20:56:52 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.926 20:56:52 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.926 20:56:52 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.926 20:56:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.926 20:56:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.926 20:56:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.926 20:56:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.926 20:56:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.926 20:56:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.926 20:56:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.926 20:56:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.926 20:56:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.926 20:56:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.926 20:56:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.926 20:56:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:50.926 20:56:52 -- scripts/common.sh@345 -- # : 1 00:03:50.926 20:56:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.926 20:56:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.926 20:56:52 -- scripts/common.sh@365 -- # decimal 1 00:03:50.926 20:56:52 -- scripts/common.sh@353 -- # local d=1 00:03:50.926 20:56:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.926 20:56:52 -- scripts/common.sh@355 -- # echo 1 00:03:50.926 20:56:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.926 20:56:52 -- scripts/common.sh@366 -- # decimal 2 00:03:50.926 20:56:52 -- scripts/common.sh@353 -- # local d=2 00:03:50.926 20:56:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.926 20:56:52 -- scripts/common.sh@355 -- # echo 2 00:03:50.926 20:56:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.926 20:56:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.926 20:56:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.926 20:56:52 -- scripts/common.sh@368 -- # return 0 00:03:50.926 20:56:52 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.926 20:56:52 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.926 --rc genhtml_branch_coverage=1 00:03:50.926 --rc genhtml_function_coverage=1 00:03:50.926 --rc genhtml_legend=1 00:03:50.926 --rc geninfo_all_blocks=1 00:03:50.926 --rc geninfo_unexecuted_blocks=1 00:03:50.926 00:03:50.926 ' 00:03:50.926 20:56:52 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.926 --rc genhtml_branch_coverage=1 00:03:50.926 --rc genhtml_function_coverage=1 00:03:50.926 --rc genhtml_legend=1 00:03:50.926 --rc geninfo_all_blocks=1 00:03:50.926 --rc geninfo_unexecuted_blocks=1 00:03:50.926 00:03:50.926 ' 00:03:50.926 20:56:52 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.926 --rc genhtml_branch_coverage=1 00:03:50.926 --rc 
genhtml_function_coverage=1 00:03:50.926 --rc genhtml_legend=1 00:03:50.926 --rc geninfo_all_blocks=1 00:03:50.926 --rc geninfo_unexecuted_blocks=1 00:03:50.926 00:03:50.926 ' 00:03:50.926 20:56:52 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.926 --rc genhtml_branch_coverage=1 00:03:50.926 --rc genhtml_function_coverage=1 00:03:50.926 --rc genhtml_legend=1 00:03:50.926 --rc geninfo_all_blocks=1 00:03:50.926 --rc geninfo_unexecuted_blocks=1 00:03:50.926 00:03:50.926 ' 00:03:50.926 20:56:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:50.926 20:56:52 -- nvmf/common.sh@7 -- # uname -s 00:03:50.926 20:56:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.926 20:56:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.926 20:56:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.926 20:56:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.926 20:56:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.926 20:56:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.926 20:56:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.926 20:56:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.926 20:56:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.926 20:56:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.188 20:56:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:51.188 20:56:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:51.188 20:56:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.188 20:56:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.188 20:56:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:51.188 20:56:52 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.188 20:56:52 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:51.188 20:56:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:51.188 20:56:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.188 20:56:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.188 20:56:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.188 20:56:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.188 20:56:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.188 20:56:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.188 20:56:52 -- paths/export.sh@5 -- # export PATH 00:03:51.188 20:56:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.188 20:56:52 -- nvmf/common.sh@51 -- # : 0 00:03:51.188 20:56:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:51.188 20:56:52 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:51.188 20:56:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.188 20:56:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.188 20:56:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.188 20:56:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:51.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:51.188 20:56:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:51.188 20:56:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:51.188 20:56:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:51.188 20:56:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.188 20:56:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.188 20:56:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.188 20:56:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.188 20:56:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.188 20:56:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.188 20:56:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.188 20:56:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.188 20:56:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.188 20:56:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.188 20:56:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.188 20:56:52 -- spdk/autotest.sh@48 -- # udevadm_pid=1820532 00:03:51.188 20:56:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.188 20:56:52 -- pm/common@17 -- # local monitor 00:03:51.188 20:56:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.188 20:56:52 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:51.188 20:56:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.188 20:56:52 -- pm/common@21 -- # date +%s 00:03:51.188 20:56:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.188 20:56:52 -- pm/common@21 -- # date +%s 00:03:51.188 20:56:52 -- pm/common@25 -- # sleep 1 00:03:51.188 20:56:52 -- pm/common@21 -- # date +%s 00:03:51.188 20:56:52 -- pm/common@21 -- # date +%s 00:03:51.188 20:56:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428612 00:03:51.188 20:56:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428612 00:03:51.188 20:56:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428612 00:03:51.188 20:56:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733428612 00:03:51.188 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428612_collect-vmstat.pm.log 00:03:51.188 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428612_collect-cpu-load.pm.log 00:03:51.188 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428612_collect-cpu-temp.pm.log 00:03:51.188 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733428612_collect-bmc-pm.bmc.pm.log 00:03:52.133 
20:56:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.133 20:56:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.133 20:56:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.133 20:56:53 -- common/autotest_common.sh@10 -- # set +x 00:03:52.133 20:56:53 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.133 20:56:53 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:52.133 20:56:53 -- common/autotest_common.sh@10 -- # set +x 00:03:52.133 20:56:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:52.133 20:56:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.133 20:56:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.133 20:56:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:52.133 20:56:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.133 20:56:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.133 20:56:53 -- common/autotest_common.sh@1457 -- # uname 00:03:52.133 20:56:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:52.133 20:56:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.133 20:56:53 -- common/autotest_common.sh@1477 -- # uname 00:03:52.133 20:56:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:52.133 20:56:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:52.133 20:56:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:52.133 lcov: LCOV version 1.15 00:03:52.133 20:56:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:07.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:07.047 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:25.174 20:57:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:25.174 20:57:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.174 20:57:23 -- common/autotest_common.sh@10 -- # set +x 00:04:25.174 20:57:23 -- spdk/autotest.sh@78 -- # rm -f 00:04:25.174 20:57:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.118 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:26.118 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:26.118 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:26.379 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:26.379 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:26.641 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:26.641 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:26.641 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:26.901 20:57:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:26.901 20:57:28 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:26.901 20:57:28 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:26.901 20:57:28 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:26.901 20:57:28 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:26.901 20:57:28 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:26.901 20:57:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:26.901 20:57:28 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:04:26.901 20:57:28 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:26.901 20:57:28 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:26.901 20:57:28 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:26.901 20:57:28 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.901 20:57:28 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:26.901 20:57:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:26.901 20:57:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.901 20:57:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.901 20:57:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:26.901 20:57:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:26.901 20:57:28 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:26.901 No valid GPT data, bailing 00:04:26.901 20:57:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.901 20:57:28 -- scripts/common.sh@394 -- # pt= 00:04:26.901 20:57:28 -- scripts/common.sh@395 -- 
# return 1 00:04:26.901 20:57:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:26.901 1+0 records in 00:04:26.901 1+0 records out 00:04:26.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399284 s, 263 MB/s 00:04:26.901 20:57:28 -- spdk/autotest.sh@105 -- # sync 00:04:26.901 20:57:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:26.901 20:57:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:26.901 20:57:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:35.049 20:57:36 -- spdk/autotest.sh@111 -- # uname -s 00:04:35.049 20:57:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:35.049 20:57:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:35.049 20:57:36 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:39.259 Hugepages 00:04:39.259 node hugesize free / total 00:04:39.259 node0 1048576kB 0 / 0 00:04:39.259 node0 2048kB 0 / 0 00:04:39.259 node1 1048576kB 0 / 0 00:04:39.259 node1 2048kB 0 / 0 00:04:39.259 00:04:39.259 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.259 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:39.259 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:39.259 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:39.259 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.5 8086 
0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:39.259 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:39.259 20:57:40 -- spdk/autotest.sh@117 -- # uname -s 00:04:39.259 20:57:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:39.259 20:57:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:39.259 20:57:40 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.471 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:43.471 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.850 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:45.110 20:57:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:46.494 20:57:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:46.494 20:57:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:46.494 20:57:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:46.494 20:57:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:46.494 20:57:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:46.494 20:57:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:46.494 20:57:47 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.494 20:57:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:46.494 20:57:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:46.494 20:57:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:46.494 20:57:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:46.494 20:57:47 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.886 Waiting for block devices as requested 00:04:49.886 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:49.886 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:49.886 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:49.886 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:50.148 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:50.148 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:50.148 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:50.409 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:50.409 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:50.670 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:50.670 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:50.670 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:50.670 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:50.961 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:50.961 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:50.961 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:50.961 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:51.223 20:57:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:51.223 20:57:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:51.223 20:57:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:51.223 20:57:52 -- 
common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:51.223 20:57:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:51.223 20:57:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:51.223 20:57:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:51.223 20:57:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:51.223 20:57:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:51.223 20:57:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:51.223 20:57:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:51.223 20:57:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:51.223 20:57:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:51.485 20:57:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:51.485 20:57:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:51.485 20:57:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:51.485 20:57:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:51.485 20:57:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:51.485 20:57:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:51.485 20:57:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:51.485 20:57:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:51.485 20:57:52 -- common/autotest_common.sh@1543 -- # continue 00:04:51.485 20:57:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:51.485 20:57:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.485 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 20:57:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:51.485 20:57:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.485 
20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:51.485 20:57:52 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.788 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:54.788 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:55.359 20:57:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:55.359 20:57:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.359 20:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.359 20:57:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:55.359 20:57:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:55.359 20:57:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:55.359 20:57:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:55.359 20:57:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:55.359 20:57:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:55.359 20:57:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:55.359 20:57:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:55.359 20:57:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:55.359 20:57:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:55.359 20:57:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.359 20:57:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.359 20:57:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:55.359 20:57:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:55.359 20:57:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:55.359 20:57:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:55.359 20:57:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:55.359 20:57:56 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:55.359 20:57:56 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:55.359 20:57:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:55.359 20:57:56 -- common/autotest_common.sh@1572 -- # return 0 00:04:55.359 20:57:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:55.359 20:57:56 -- common/autotest_common.sh@1580 -- # return 0 00:04:55.359 20:57:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:55.359 20:57:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:55.359 20:57:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:55.359 20:57:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:55.359 20:57:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:55.359 20:57:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.359 20:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.359 20:57:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:55.359 20:57:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:55.359 20:57:56 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.359 20:57:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.359 20:57:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.359 ************************************ 00:04:55.359 START TEST env 00:04:55.359 ************************************ 00:04:55.359 20:57:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:55.621 * Looking for test storage... 00:04:55.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:55.621 20:57:56 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.621 20:57:56 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.621 20:57:56 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.621 20:57:56 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.621 20:57:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.621 20:57:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.621 20:57:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.621 20:57:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.621 20:57:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.622 20:57:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.622 20:57:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.622 20:57:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.622 20:57:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.622 20:57:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.622 20:57:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.622 20:57:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:55.622 20:57:56 env -- scripts/common.sh@345 -- # : 1 00:04:55.622 20:57:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.622 20:57:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.622 20:57:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:55.622 20:57:56 env -- scripts/common.sh@353 -- # local d=1 00:04:55.622 20:57:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.622 20:57:56 env -- scripts/common.sh@355 -- # echo 1 00:04:55.622 20:57:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.622 20:57:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:55.622 20:57:56 env -- scripts/common.sh@353 -- # local d=2 00:04:55.622 20:57:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.622 20:57:56 env -- scripts/common.sh@355 -- # echo 2 00:04:55.622 20:57:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.622 20:57:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.622 20:57:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.622 20:57:56 env -- scripts/common.sh@368 -- # return 0 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.622 --rc genhtml_branch_coverage=1 00:04:55.622 --rc genhtml_function_coverage=1 00:04:55.622 --rc genhtml_legend=1 00:04:55.622 --rc geninfo_all_blocks=1 00:04:55.622 --rc geninfo_unexecuted_blocks=1 00:04:55.622 00:04:55.622 ' 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.622 --rc genhtml_branch_coverage=1 00:04:55.622 --rc genhtml_function_coverage=1 00:04:55.622 --rc genhtml_legend=1 00:04:55.622 --rc geninfo_all_blocks=1 00:04:55.622 --rc geninfo_unexecuted_blocks=1 00:04:55.622 00:04:55.622 ' 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:55.622 --rc genhtml_branch_coverage=1 00:04:55.622 --rc genhtml_function_coverage=1 00:04:55.622 --rc genhtml_legend=1 00:04:55.622 --rc geninfo_all_blocks=1 00:04:55.622 --rc geninfo_unexecuted_blocks=1 00:04:55.622 00:04:55.622 ' 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.622 --rc genhtml_branch_coverage=1 00:04:55.622 --rc genhtml_function_coverage=1 00:04:55.622 --rc genhtml_legend=1 00:04:55.622 --rc geninfo_all_blocks=1 00:04:55.622 --rc geninfo_unexecuted_blocks=1 00:04:55.622 00:04:55.622 ' 00:04:55.622 20:57:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.622 20:57:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.622 20:57:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.622 ************************************ 00:04:55.622 START TEST env_memory 00:04:55.622 ************************************ 00:04:55.622 20:57:57 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:55.622 00:04:55.622 00:04:55.622 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.622 http://cunit.sourceforge.net/ 00:04:55.622 00:04:55.622 00:04:55.622 Suite: memory 00:04:55.883 Test: alloc and free memory map ...[2024-12-05 20:57:57.093067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:55.883 passed 00:04:55.883 Test: mem map translation ...[2024-12-05 20:57:57.118946] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:55.883 [2024-12-05 
20:57:57.118980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:55.883 [2024-12-05 20:57:57.119026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:55.883 [2024-12-05 20:57:57.119039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:55.883 passed 00:04:55.883 Test: mem map registration ...[2024-12-05 20:57:57.174329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:55.883 [2024-12-05 20:57:57.174346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:55.883 passed 00:04:55.883 Test: mem map adjacent registrations ...passed 00:04:55.883 00:04:55.883 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.883 suites 1 1 n/a 0 0 00:04:55.883 tests 4 4 4 0 0 00:04:55.883 asserts 152 152 152 0 n/a 00:04:55.883 00:04:55.883 Elapsed time = 0.194 seconds 00:04:55.883 00:04:55.883 real 0m0.208s 00:04:55.883 user 0m0.198s 00:04:55.883 sys 0m0.010s 00:04:55.883 20:57:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.883 20:57:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:55.883 ************************************ 00:04:55.883 END TEST env_memory 00:04:55.883 ************************************ 00:04:55.883 20:57:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:55.883 20:57:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:55.883 20:57:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.883 20:57:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.144 ************************************ 00:04:56.144 START TEST env_vtophys 00:04:56.144 ************************************ 00:04:56.144 20:57:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.144 EAL: lib.eal log level changed from notice to debug 00:04:56.144 EAL: Detected lcore 0 as core 0 on socket 0 00:04:56.144 EAL: Detected lcore 1 as core 1 on socket 0 00:04:56.144 EAL: Detected lcore 2 as core 2 on socket 0 00:04:56.144 EAL: Detected lcore 3 as core 3 on socket 0 00:04:56.144 EAL: Detected lcore 4 as core 4 on socket 0 00:04:56.144 EAL: Detected lcore 5 as core 5 on socket 0 00:04:56.144 EAL: Detected lcore 6 as core 6 on socket 0 00:04:56.144 EAL: Detected lcore 7 as core 7 on socket 0 00:04:56.145 EAL: Detected lcore 8 as core 8 on socket 0 00:04:56.145 EAL: Detected lcore 9 as core 9 on socket 0 00:04:56.145 EAL: Detected lcore 10 as core 10 on socket 0 00:04:56.145 EAL: Detected lcore 11 as core 11 on socket 0 00:04:56.145 EAL: Detected lcore 12 as core 12 on socket 0 00:04:56.145 EAL: Detected lcore 13 as core 13 on socket 0 00:04:56.145 EAL: Detected lcore 14 as core 14 on socket 0 00:04:56.145 EAL: Detected lcore 15 as core 15 on socket 0 00:04:56.145 EAL: Detected lcore 16 as core 16 on socket 0 00:04:56.145 EAL: Detected lcore 17 as core 17 on socket 0 00:04:56.145 EAL: Detected lcore 18 as core 18 on socket 0 00:04:56.145 EAL: Detected lcore 19 as core 19 on socket 0 00:04:56.145 EAL: Detected lcore 20 as core 20 on socket 0 00:04:56.145 EAL: Detected lcore 21 as core 21 on socket 0 00:04:56.145 EAL: Detected lcore 22 as core 22 on socket 0 00:04:56.145 EAL: Detected lcore 23 as core 23 on socket 0 00:04:56.145 EAL: Detected lcore 24 as core 24 on socket 0 00:04:56.145 EAL: Detected lcore 25 
as core 25 on socket 0 00:04:56.145 EAL: Detected lcore 26 as core 26 on socket 0 00:04:56.145 EAL: Detected lcore 27 as core 27 on socket 0 00:04:56.145 EAL: Detected lcore 28 as core 28 on socket 0 00:04:56.145 EAL: Detected lcore 29 as core 29 on socket 0 00:04:56.145 EAL: Detected lcore 30 as core 30 on socket 0 00:04:56.145 EAL: Detected lcore 31 as core 31 on socket 0 00:04:56.145 EAL: Detected lcore 32 as core 32 on socket 0 00:04:56.145 EAL: Detected lcore 33 as core 33 on socket 0 00:04:56.145 EAL: Detected lcore 34 as core 34 on socket 0 00:04:56.145 EAL: Detected lcore 35 as core 35 on socket 0 00:04:56.145 EAL: Detected lcore 36 as core 0 on socket 1 00:04:56.145 EAL: Detected lcore 37 as core 1 on socket 1 00:04:56.145 EAL: Detected lcore 38 as core 2 on socket 1 00:04:56.145 EAL: Detected lcore 39 as core 3 on socket 1 00:04:56.145 EAL: Detected lcore 40 as core 4 on socket 1 00:04:56.145 EAL: Detected lcore 41 as core 5 on socket 1 00:04:56.145 EAL: Detected lcore 42 as core 6 on socket 1 00:04:56.145 EAL: Detected lcore 43 as core 7 on socket 1 00:04:56.145 EAL: Detected lcore 44 as core 8 on socket 1 00:04:56.145 EAL: Detected lcore 45 as core 9 on socket 1 00:04:56.145 EAL: Detected lcore 46 as core 10 on socket 1 00:04:56.145 EAL: Detected lcore 47 as core 11 on socket 1 00:04:56.145 EAL: Detected lcore 48 as core 12 on socket 1 00:04:56.145 EAL: Detected lcore 49 as core 13 on socket 1 00:04:56.145 EAL: Detected lcore 50 as core 14 on socket 1 00:04:56.145 EAL: Detected lcore 51 as core 15 on socket 1 00:04:56.145 EAL: Detected lcore 52 as core 16 on socket 1 00:04:56.145 EAL: Detected lcore 53 as core 17 on socket 1 00:04:56.145 EAL: Detected lcore 54 as core 18 on socket 1 00:04:56.145 EAL: Detected lcore 55 as core 19 on socket 1 00:04:56.145 EAL: Detected lcore 56 as core 20 on socket 1 00:04:56.145 EAL: Detected lcore 57 as core 21 on socket 1 00:04:56.145 EAL: Detected lcore 58 as core 22 on socket 1 00:04:56.145 EAL: Detected lcore 59 as 
core 23 on socket 1 00:04:56.145 EAL: Detected lcore 60 as core 24 on socket 1 00:04:56.145 EAL: Detected lcore 61 as core 25 on socket 1 00:04:56.145 EAL: Detected lcore 62 as core 26 on socket 1 00:04:56.145 EAL: Detected lcore 63 as core 27 on socket 1 00:04:56.145 EAL: Detected lcore 64 as core 28 on socket 1 00:04:56.145 EAL: Detected lcore 65 as core 29 on socket 1 00:04:56.145 EAL: Detected lcore 66 as core 30 on socket 1 00:04:56.145 EAL: Detected lcore 67 as core 31 on socket 1 00:04:56.145 EAL: Detected lcore 68 as core 32 on socket 1 00:04:56.145 EAL: Detected lcore 69 as core 33 on socket 1 00:04:56.145 EAL: Detected lcore 70 as core 34 on socket 1 00:04:56.145 EAL: Detected lcore 71 as core 35 on socket 1 00:04:56.145 EAL: Detected lcore 72 as core 0 on socket 0 00:04:56.145 EAL: Detected lcore 73 as core 1 on socket 0 00:04:56.145 EAL: Detected lcore 74 as core 2 on socket 0 00:04:56.145 EAL: Detected lcore 75 as core 3 on socket 0 00:04:56.145 EAL: Detected lcore 76 as core 4 on socket 0 00:04:56.145 EAL: Detected lcore 77 as core 5 on socket 0 00:04:56.145 EAL: Detected lcore 78 as core 6 on socket 0 00:04:56.145 EAL: Detected lcore 79 as core 7 on socket 0 00:04:56.145 EAL: Detected lcore 80 as core 8 on socket 0 00:04:56.145 EAL: Detected lcore 81 as core 9 on socket 0 00:04:56.145 EAL: Detected lcore 82 as core 10 on socket 0 00:04:56.145 EAL: Detected lcore 83 as core 11 on socket 0 00:04:56.145 EAL: Detected lcore 84 as core 12 on socket 0 00:04:56.145 EAL: Detected lcore 85 as core 13 on socket 0 00:04:56.145 EAL: Detected lcore 86 as core 14 on socket 0 00:04:56.145 EAL: Detected lcore 87 as core 15 on socket 0 00:04:56.145 EAL: Detected lcore 88 as core 16 on socket 0 00:04:56.145 EAL: Detected lcore 89 as core 17 on socket 0 00:04:56.145 EAL: Detected lcore 90 as core 18 on socket 0 00:04:56.145 EAL: Detected lcore 91 as core 19 on socket 0 00:04:56.145 EAL: Detected lcore 92 as core 20 on socket 0 00:04:56.145 EAL: Detected lcore 93 as 
core 21 on socket 0 00:04:56.145 EAL: Detected lcore 94 as core 22 on socket 0 00:04:56.145 EAL: Detected lcore 95 as core 23 on socket 0 00:04:56.145 EAL: Detected lcore 96 as core 24 on socket 0 00:04:56.145 EAL: Detected lcore 97 as core 25 on socket 0 00:04:56.145 EAL: Detected lcore 98 as core 26 on socket 0 00:04:56.145 EAL: Detected lcore 99 as core 27 on socket 0 00:04:56.145 EAL: Detected lcore 100 as core 28 on socket 0 00:04:56.145 EAL: Detected lcore 101 as core 29 on socket 0 00:04:56.145 EAL: Detected lcore 102 as core 30 on socket 0 00:04:56.145 EAL: Detected lcore 103 as core 31 on socket 0 00:04:56.145 EAL: Detected lcore 104 as core 32 on socket 0 00:04:56.145 EAL: Detected lcore 105 as core 33 on socket 0 00:04:56.145 EAL: Detected lcore 106 as core 34 on socket 0 00:04:56.145 EAL: Detected lcore 107 as core 35 on socket 0 00:04:56.145 EAL: Detected lcore 108 as core 0 on socket 1 00:04:56.145 EAL: Detected lcore 109 as core 1 on socket 1 00:04:56.145 EAL: Detected lcore 110 as core 2 on socket 1 00:04:56.145 EAL: Detected lcore 111 as core 3 on socket 1 00:04:56.145 EAL: Detected lcore 112 as core 4 on socket 1 00:04:56.145 EAL: Detected lcore 113 as core 5 on socket 1 00:04:56.145 EAL: Detected lcore 114 as core 6 on socket 1 00:04:56.145 EAL: Detected lcore 115 as core 7 on socket 1 00:04:56.145 EAL: Detected lcore 116 as core 8 on socket 1 00:04:56.145 EAL: Detected lcore 117 as core 9 on socket 1 00:04:56.145 EAL: Detected lcore 118 as core 10 on socket 1 00:04:56.145 EAL: Detected lcore 119 as core 11 on socket 1 00:04:56.145 EAL: Detected lcore 120 as core 12 on socket 1 00:04:56.145 EAL: Detected lcore 121 as core 13 on socket 1 00:04:56.145 EAL: Detected lcore 122 as core 14 on socket 1 00:04:56.145 EAL: Detected lcore 123 as core 15 on socket 1 00:04:56.145 EAL: Detected lcore 124 as core 16 on socket 1 00:04:56.145 EAL: Detected lcore 125 as core 17 on socket 1 00:04:56.145 EAL: Detected lcore 126 as core 18 on socket 1 00:04:56.145 
EAL: Detected lcore 127 as core 19 on socket 1 00:04:56.145 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:56.145 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:56.145 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:56.145 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:56.145 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:56.145 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:56.145 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:56.145 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:56.145 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:56.145 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:56.145 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:56.145 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:56.145 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:56.145 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:56.145 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:56.145 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:56.145 EAL: Maximum logical cores by configuration: 128 00:04:56.145 EAL: Detected CPU lcores: 128 00:04:56.145 EAL: Detected NUMA nodes: 2 00:04:56.145 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:56.145 EAL: Detected shared linkage of DPDK 00:04:56.145 EAL: No shared files mode enabled, IPC will be disabled 00:04:56.145 EAL: Bus pci wants IOVA as 'DC' 00:04:56.145 EAL: Buses did not request a specific IOVA mode. 00:04:56.145 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:56.145 EAL: Selected IOVA mode 'VA' 00:04:56.145 EAL: Probing VFIO support... 00:04:56.145 EAL: IOMMU type 1 (Type 1) is supported 00:04:56.145 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:56.145 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:56.145 EAL: VFIO support initialized 00:04:56.145 EAL: Ask a virtual area of 0x2e000 bytes 00:04:56.145 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:56.145 EAL: Setting up physically contiguous memory... 
00:04:56.145 EAL: Setting maximum number of open files to 524288 00:04:56.145 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:56.145 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:56.145 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:56.145 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.145 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:56.145 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.145 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.145 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:56.145 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:56.145 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.145 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:56.145 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.145 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.145 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:56.145 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:56.145 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.145 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:56.145 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.145 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.145 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:56.145 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:56.145 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.145 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:56.145 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.145 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.145 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:56.145 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:56.145 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:56.145 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.145 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:56.146 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.146 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.146 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:56.146 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:56.146 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.146 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:56.146 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.146 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.146 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:56.146 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:56.146 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.146 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:56.146 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.146 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.146 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:56.146 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:56.146 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.146 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:56.146 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.146 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.146 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:56.146 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:56.146 EAL: Hugepages will be freed exactly as allocated. 
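The per-node hugepage counts reported earlier by `setup.sh status` (node0/node1, 2048kB and 1048576kB sizes) and the EAL's memseg bookkeeping here both rest on the kernel's sysfs hugepage counters. A minimal sketch that reads them directly — `hp_status_line` is a hypothetical helper, not part of the SPDK scripts, and the paths assume the standard Linux sysfs layout:

```shell
#!/usr/bin/env bash
# Sketch: reproduce the "node<N> <size> free / total" lines that
# setup.sh status prints, straight from sysfs hugepage counters.
hp_status_line() {
    # $1=node name, $2=page size, $3=free pages, $4=total pages
    echo "$1 $2 $3 / $4"
}

for node in /sys/devices/system/node/node*; do
    [ -d "$node" ] || continue
    for hp in "$node"/hugepages/hugepages-*; do
        [ -d "$hp" ] || continue
        hp_status_line "${node##*/}" "${hp##*hugepages-}" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done
done
```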
00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: TSC frequency is ~2400000 KHz 00:04:56.146 EAL: Main lcore 0 is ready (tid=7f0b16fcca00;cpuset=[0]) 00:04:56.146 EAL: Trying to obtain current memory policy. 00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 0 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 2MB 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:56.146 EAL: Mem event callback 'spdk:(nil)' registered 00:04:56.146 00:04:56.146 00:04:56.146 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.146 http://cunit.sourceforge.net/ 00:04:56.146 00:04:56.146 00:04:56.146 Suite: components_suite 00:04:56.146 Test: vtophys_malloc_test ...passed 00:04:56.146 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 4MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 4MB 00:04:56.146 EAL: Trying to obtain current memory policy. 
00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 6MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 6MB 00:04:56.146 EAL: Trying to obtain current memory policy. 00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 10MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 10MB 00:04:56.146 EAL: Trying to obtain current memory policy. 00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 18MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 18MB 00:04:56.146 EAL: Trying to obtain current memory policy. 
00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 34MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 34MB 00:04:56.146 EAL: Trying to obtain current memory policy. 00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 66MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 66MB 00:04:56.146 EAL: Trying to obtain current memory policy. 00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 130MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was shrunk by 130MB 00:04:56.146 EAL: Trying to obtain current memory policy. 
00:04:56.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.146 EAL: Restoring previous memory policy: 4 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.146 EAL: request: mp_malloc_sync 00:04:56.146 EAL: No shared files mode enabled, IPC is disabled 00:04:56.146 EAL: Heap on socket 0 was expanded by 258MB 00:04:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.407 EAL: request: mp_malloc_sync 00:04:56.407 EAL: No shared files mode enabled, IPC is disabled 00:04:56.407 EAL: Heap on socket 0 was shrunk by 258MB 00:04:56.407 EAL: Trying to obtain current memory policy. 00:04:56.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.407 EAL: Restoring previous memory policy: 4 00:04:56.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.407 EAL: request: mp_malloc_sync 00:04:56.407 EAL: No shared files mode enabled, IPC is disabled 00:04:56.407 EAL: Heap on socket 0 was expanded by 514MB 00:04:56.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.407 EAL: request: mp_malloc_sync 00:04:56.407 EAL: No shared files mode enabled, IPC is disabled 00:04:56.407 EAL: Heap on socket 0 was shrunk by 514MB 00:04:56.407 EAL: Trying to obtain current memory policy. 
00:04:56.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.668 EAL: Restoring previous memory policy: 4 00:04:56.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.668 EAL: request: mp_malloc_sync 00:04:56.668 EAL: No shared files mode enabled, IPC is disabled 00:04:56.668 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.931 EAL: request: mp_malloc_sync 00:04:56.931 EAL: No shared files mode enabled, IPC is disabled 00:04:56.931 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.931 passed 00:04:56.931 00:04:56.931 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.931 suites 1 1 n/a 0 0 00:04:56.931 tests 2 2 2 0 0 00:04:56.931 asserts 497 497 497 0 n/a 00:04:56.931 00:04:56.931 Elapsed time = 0.650 seconds 00:04:56.931 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.931 EAL: request: mp_malloc_sync 00:04:56.931 EAL: No shared files mode enabled, IPC is disabled 00:04:56.931 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.931 EAL: No shared files mode enabled, IPC is disabled 00:04:56.931 EAL: No shared files mode enabled, IPC is disabled 00:04:56.931 EAL: No shared files mode enabled, IPC is disabled 00:04:56.931 00:04:56.931 real 0m0.792s 00:04:56.931 user 0m0.420s 00:04:56.931 sys 0m0.343s 00:04:56.931 20:57:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.931 20:57:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.931 ************************************ 00:04:56.931 END TEST env_vtophys 00:04:56.931 ************************************ 00:04:56.931 20:57:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:56.931 20:57:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.931 20:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.931 20:57:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.931 
************************************ 00:04:56.931 START TEST env_pci 00:04:56.931 ************************************ 00:04:56.931 20:57:58 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:56.931 00:04:56.931 00:04:56.931 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.931 http://cunit.sourceforge.net/ 00:04:56.931 00:04:56.931 00:04:56.931 Suite: pci 00:04:56.931 Test: pci_hook ...[2024-12-05 20:57:58.215370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1840985 has claimed it 00:04:56.931 EAL: Cannot find device (10000:00:01.0) 00:04:56.931 EAL: Failed to attach device on primary process 00:04:56.931 passed 00:04:56.931 00:04:56.931 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.931 suites 1 1 n/a 0 0 00:04:56.931 tests 1 1 1 0 0 00:04:56.931 asserts 25 25 25 0 n/a 00:04:56.931 00:04:56.931 Elapsed time = 0.033 seconds 00:04:56.931 00:04:56.931 real 0m0.053s 00:04:56.931 user 0m0.021s 00:04:56.931 sys 0m0.032s 00:04:56.931 20:57:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.931 20:57:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.931 ************************************ 00:04:56.931 END TEST env_pci 00:04:56.931 ************************************ 00:04:56.931 20:57:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.931 20:57:58 env -- env/env.sh@15 -- # uname 00:04:56.931 20:57:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.931 20:57:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.931 20:57:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.931 20:57:58 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:56.931 20:57:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.931 20:57:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.931 ************************************ 00:04:56.931 START TEST env_dpdk_post_init 00:04:56.931 ************************************ 00:04:56.931 20:57:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.193 EAL: Detected CPU lcores: 128 00:04:57.193 EAL: Detected NUMA nodes: 2 00:04:57.193 EAL: Detected shared linkage of DPDK 00:04:57.193 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.193 EAL: Selected IOVA mode 'VA' 00:04:57.193 EAL: VFIO support initialized 00:04:57.193 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.193 EAL: Using IOMMU type 1 (Type 1) 00:04:57.193 EAL: Ignore mapping IO port bar(1) 00:04:57.454 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:57.454 EAL: Ignore mapping IO port bar(1) 00:04:57.715 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:57.715 EAL: Ignore mapping IO port bar(1) 00:04:57.975 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:57.975 EAL: Ignore mapping IO port bar(1) 00:04:57.975 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:58.235 EAL: Ignore mapping IO port bar(1) 00:04:58.235 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:58.495 EAL: Ignore mapping IO port bar(1) 00:04:58.495 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:58.755 EAL: Ignore mapping IO port bar(1) 00:04:58.755 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:58.755 EAL: Ignore mapping IO port bar(1) 00:04:59.017 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:59.278 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:59.278 EAL: Ignore mapping IO port bar(1) 00:04:59.539 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:59.539 EAL: Ignore mapping IO port bar(1) 00:04:59.539 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:59.800 EAL: Ignore mapping IO port bar(1) 00:04:59.800 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:00.061 EAL: Ignore mapping IO port bar(1) 00:05:00.061 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:00.322 EAL: Ignore mapping IO port bar(1) 00:05:00.322 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:00.322 EAL: Ignore mapping IO port bar(1) 00:05:00.584 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:00.584 EAL: Ignore mapping IO port bar(1) 00:05:00.845 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:00.845 EAL: Ignore mapping IO port bar(1) 00:05:01.106 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:01.106 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:01.106 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:01.106 Starting DPDK initialization... 00:05:01.106 Starting SPDK post initialization... 00:05:01.106 SPDK NVMe probe 00:05:01.106 Attaching to 0000:65:00.0 00:05:01.106 Attached to 0000:65:00.0 00:05:01.106 Cleaning up... 
00:05:03.021 00:05:03.021 real 0m5.741s 00:05:03.021 user 0m0.101s 00:05:03.021 sys 0m0.188s 00:05:03.021 20:58:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.021 20:58:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.021 ************************************ 00:05:03.021 END TEST env_dpdk_post_init 00:05:03.021 ************************************ 00:05:03.021 20:58:04 env -- env/env.sh@26 -- # uname 00:05:03.021 20:58:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:03.021 20:58:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:03.021 20:58:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.021 20:58:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.021 20:58:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.021 ************************************ 00:05:03.021 START TEST env_mem_callbacks 00:05:03.021 ************************************ 00:05:03.021 20:58:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:03.021 EAL: Detected CPU lcores: 128 00:05:03.021 EAL: Detected NUMA nodes: 2 00:05:03.021 EAL: Detected shared linkage of DPDK 00:05:03.021 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:03.021 EAL: Selected IOVA mode 'VA' 00:05:03.021 EAL: VFIO support initialized 00:05:03.021 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:03.021 00:05:03.021 00:05:03.021 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.021 http://cunit.sourceforge.net/ 00:05:03.021 00:05:03.021 00:05:03.021 Suite: memory 00:05:03.021 Test: test ... 
00:05:03.021 register 0x200000200000 2097152 00:05:03.021 malloc 3145728 00:05:03.021 register 0x200000400000 4194304 00:05:03.021 buf 0x200000500000 len 3145728 PASSED 00:05:03.021 malloc 64 00:05:03.021 buf 0x2000004fff40 len 64 PASSED 00:05:03.021 malloc 4194304 00:05:03.021 register 0x200000800000 6291456 00:05:03.021 buf 0x200000a00000 len 4194304 PASSED 00:05:03.021 free 0x200000500000 3145728 00:05:03.021 free 0x2000004fff40 64 00:05:03.021 unregister 0x200000400000 4194304 PASSED 00:05:03.021 free 0x200000a00000 4194304 00:05:03.021 unregister 0x200000800000 6291456 PASSED 00:05:03.021 malloc 8388608 00:05:03.021 register 0x200000400000 10485760 00:05:03.021 buf 0x200000600000 len 8388608 PASSED 00:05:03.021 free 0x200000600000 8388608 00:05:03.021 unregister 0x200000400000 10485760 PASSED 00:05:03.021 passed 00:05:03.021 00:05:03.021 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.021 suites 1 1 n/a 0 0 00:05:03.021 tests 1 1 1 0 0 00:05:03.021 asserts 15 15 15 0 n/a 00:05:03.021 00:05:03.021 Elapsed time = 0.008 seconds 00:05:03.021 00:05:03.021 real 0m0.069s 00:05:03.021 user 0m0.023s 00:05:03.021 sys 0m0.047s 00:05:03.021 20:58:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.021 20:58:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:03.021 ************************************ 00:05:03.021 END TEST env_mem_callbacks 00:05:03.021 ************************************ 00:05:03.021 00:05:03.021 real 0m7.489s 00:05:03.021 user 0m1.028s 00:05:03.021 sys 0m1.013s 00:05:03.021 20:58:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.021 20:58:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.021 ************************************ 00:05:03.021 END TEST env 00:05:03.021 ************************************ 00:05:03.021 20:58:04 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:03.022 20:58:04 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.022 20:58:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.022 20:58:04 -- common/autotest_common.sh@10 -- # set +x 00:05:03.022 ************************************ 00:05:03.022 START TEST rpc 00:05:03.022 ************************************ 00:05:03.022 20:58:04 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:03.022 * Looking for test storage... 00:05:03.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.022 20:58:04 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.022 20:58:04 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.282 20:58:04 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.282 20:58:04 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.282 20:58:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.282 20:58:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.282 20:58:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.282 20:58:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.282 20:58:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.282 20:58:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.282 20:58:04 rpc -- scripts/common.sh@345 -- # : 1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.282 20:58:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.282 20:58:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.282 20:58:04 rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.282 20:58:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.282 20:58:04 rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.282 20:58:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.282 20:58:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.282 20:58:04 rpc -- scripts/common.sh@368 -- # return 0 00:05:03.282 20:58:04 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.282 20:58:04 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.282 --rc genhtml_branch_coverage=1 00:05:03.282 --rc genhtml_function_coverage=1 00:05:03.283 --rc genhtml_legend=1 00:05:03.283 --rc geninfo_all_blocks=1 00:05:03.283 --rc geninfo_unexecuted_blocks=1 00:05:03.283 00:05:03.283 ' 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.283 --rc genhtml_branch_coverage=1 00:05:03.283 --rc genhtml_function_coverage=1 00:05:03.283 --rc genhtml_legend=1 00:05:03.283 --rc geninfo_all_blocks=1 00:05:03.283 --rc geninfo_unexecuted_blocks=1 00:05:03.283 00:05:03.283 ' 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:03.283 --rc genhtml_branch_coverage=1 00:05:03.283 --rc genhtml_function_coverage=1 00:05:03.283 --rc genhtml_legend=1 00:05:03.283 --rc geninfo_all_blocks=1 00:05:03.283 --rc geninfo_unexecuted_blocks=1 00:05:03.283 00:05:03.283 ' 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.283 --rc genhtml_branch_coverage=1 00:05:03.283 --rc genhtml_function_coverage=1 00:05:03.283 --rc genhtml_legend=1 00:05:03.283 --rc geninfo_all_blocks=1 00:05:03.283 --rc geninfo_unexecuted_blocks=1 00:05:03.283 00:05:03.283 ' 00:05:03.283 20:58:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1842430 00:05:03.283 20:58:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.283 20:58:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1842430 00:05:03.283 20:58:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 1842430 ']' 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.283 20:58:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.283 [2024-12-05 20:58:04.609176] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:03.283 [2024-12-05 20:58:04.609233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842430 ] 00:05:03.283 [2024-12-05 20:58:04.688340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.542 [2024-12-05 20:58:04.723572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:03.542 [2024-12-05 20:58:04.723606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1842430' to capture a snapshot of events at runtime. 00:05:03.542 [2024-12-05 20:58:04.723614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:03.542 [2024-12-05 20:58:04.723621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:03.542 [2024-12-05 20:58:04.723626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1842430 for offline analysis/debug. 
00:05:03.542 [2024-12-05 20:58:04.724209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.111 20:58:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.111 20:58:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.111 20:58:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.111 20:58:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.111 20:58:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:04.111 20:58:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:04.111 20:58:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.111 20:58:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.111 20:58:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.111 ************************************ 00:05:04.111 START TEST rpc_integrity 00:05:04.111 ************************************ 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.111 20:58:05 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.111 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.111 { 00:05:04.111 "name": "Malloc0", 00:05:04.111 "aliases": [ 00:05:04.111 "b102348c-282b-423f-9b85-85345e2d473f" 00:05:04.111 ], 00:05:04.111 "product_name": "Malloc disk", 00:05:04.111 "block_size": 512, 00:05:04.111 "num_blocks": 16384, 00:05:04.111 "uuid": "b102348c-282b-423f-9b85-85345e2d473f", 00:05:04.111 "assigned_rate_limits": { 00:05:04.111 "rw_ios_per_sec": 0, 00:05:04.111 "rw_mbytes_per_sec": 0, 00:05:04.111 "r_mbytes_per_sec": 0, 00:05:04.111 "w_mbytes_per_sec": 0 00:05:04.111 }, 00:05:04.111 "claimed": false, 00:05:04.111 "zoned": false, 00:05:04.111 "supported_io_types": { 00:05:04.111 "read": true, 00:05:04.111 "write": true, 00:05:04.111 "unmap": true, 00:05:04.111 "flush": true, 00:05:04.111 "reset": true, 00:05:04.111 "nvme_admin": false, 00:05:04.111 "nvme_io": false, 00:05:04.111 "nvme_io_md": false, 00:05:04.111 "write_zeroes": true, 00:05:04.111 "zcopy": true, 00:05:04.111 "get_zone_info": false, 00:05:04.111 
"zone_management": false, 00:05:04.111 "zone_append": false, 00:05:04.111 "compare": false, 00:05:04.111 "compare_and_write": false, 00:05:04.111 "abort": true, 00:05:04.111 "seek_hole": false, 00:05:04.111 "seek_data": false, 00:05:04.111 "copy": true, 00:05:04.111 "nvme_iov_md": false 00:05:04.111 }, 00:05:04.111 "memory_domains": [ 00:05:04.111 { 00:05:04.111 "dma_device_id": "system", 00:05:04.111 "dma_device_type": 1 00:05:04.111 }, 00:05:04.111 { 00:05:04.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.111 "dma_device_type": 2 00:05:04.111 } 00:05:04.111 ], 00:05:04.111 "driver_specific": {} 00:05:04.111 } 00:05:04.111 ]' 00:05:04.111 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.371 [2024-12-05 20:58:05.570824] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:04.371 [2024-12-05 20:58:05.570858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.371 [2024-12-05 20:58:05.570876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd95840 00:05:04.371 [2024-12-05 20:58:05.570884] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.371 [2024-12-05 20:58:05.572246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.371 [2024-12-05 20:58:05.572268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.371 Passthru0 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.371 { 00:05:04.371 "name": "Malloc0", 00:05:04.371 "aliases": [ 00:05:04.371 "b102348c-282b-423f-9b85-85345e2d473f" 00:05:04.371 ], 00:05:04.371 "product_name": "Malloc disk", 00:05:04.371 "block_size": 512, 00:05:04.371 "num_blocks": 16384, 00:05:04.371 "uuid": "b102348c-282b-423f-9b85-85345e2d473f", 00:05:04.371 "assigned_rate_limits": { 00:05:04.371 "rw_ios_per_sec": 0, 00:05:04.371 "rw_mbytes_per_sec": 0, 00:05:04.371 "r_mbytes_per_sec": 0, 00:05:04.371 "w_mbytes_per_sec": 0 00:05:04.371 }, 00:05:04.371 "claimed": true, 00:05:04.371 "claim_type": "exclusive_write", 00:05:04.371 "zoned": false, 00:05:04.371 "supported_io_types": { 00:05:04.371 "read": true, 00:05:04.371 "write": true, 00:05:04.371 "unmap": true, 00:05:04.371 "flush": true, 00:05:04.371 "reset": true, 00:05:04.371 "nvme_admin": false, 00:05:04.371 "nvme_io": false, 00:05:04.371 "nvme_io_md": false, 00:05:04.371 "write_zeroes": true, 00:05:04.371 "zcopy": true, 00:05:04.371 "get_zone_info": false, 00:05:04.371 "zone_management": false, 00:05:04.371 "zone_append": false, 00:05:04.371 "compare": false, 00:05:04.371 "compare_and_write": false, 00:05:04.371 "abort": true, 00:05:04.371 "seek_hole": false, 00:05:04.371 "seek_data": false, 00:05:04.371 "copy": true, 00:05:04.371 "nvme_iov_md": false 00:05:04.371 }, 00:05:04.371 "memory_domains": [ 00:05:04.371 { 00:05:04.371 "dma_device_id": "system", 00:05:04.371 "dma_device_type": 1 00:05:04.371 }, 00:05:04.371 { 00:05:04.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.371 "dma_device_type": 2 00:05:04.371 } 00:05:04.371 ], 00:05:04.371 "driver_specific": {} 00:05:04.371 }, 00:05:04.371 { 
00:05:04.371 "name": "Passthru0", 00:05:04.371 "aliases": [ 00:05:04.371 "e1237915-e1ff-5d0f-a70f-5600402bfdab" 00:05:04.371 ], 00:05:04.371 "product_name": "passthru", 00:05:04.371 "block_size": 512, 00:05:04.371 "num_blocks": 16384, 00:05:04.371 "uuid": "e1237915-e1ff-5d0f-a70f-5600402bfdab", 00:05:04.371 "assigned_rate_limits": { 00:05:04.371 "rw_ios_per_sec": 0, 00:05:04.371 "rw_mbytes_per_sec": 0, 00:05:04.371 "r_mbytes_per_sec": 0, 00:05:04.371 "w_mbytes_per_sec": 0 00:05:04.371 }, 00:05:04.371 "claimed": false, 00:05:04.371 "zoned": false, 00:05:04.371 "supported_io_types": { 00:05:04.371 "read": true, 00:05:04.371 "write": true, 00:05:04.371 "unmap": true, 00:05:04.371 "flush": true, 00:05:04.371 "reset": true, 00:05:04.371 "nvme_admin": false, 00:05:04.371 "nvme_io": false, 00:05:04.371 "nvme_io_md": false, 00:05:04.371 "write_zeroes": true, 00:05:04.371 "zcopy": true, 00:05:04.371 "get_zone_info": false, 00:05:04.371 "zone_management": false, 00:05:04.371 "zone_append": false, 00:05:04.371 "compare": false, 00:05:04.371 "compare_and_write": false, 00:05:04.371 "abort": true, 00:05:04.371 "seek_hole": false, 00:05:04.371 "seek_data": false, 00:05:04.371 "copy": true, 00:05:04.371 "nvme_iov_md": false 00:05:04.371 }, 00:05:04.371 "memory_domains": [ 00:05:04.371 { 00:05:04.371 "dma_device_id": "system", 00:05:04.371 "dma_device_type": 1 00:05:04.371 }, 00:05:04.371 { 00:05:04.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.371 "dma_device_type": 2 00:05:04.371 } 00:05:04.371 ], 00:05:04.371 "driver_specific": { 00:05:04.371 "passthru": { 00:05:04.371 "name": "Passthru0", 00:05:04.371 "base_bdev_name": "Malloc0" 00:05:04.371 } 00:05:04.371 } 00:05:04.371 } 00:05:04.371 ]' 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.371 20:58:05 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.371 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.371 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.372 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.372 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.372 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.372 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.372 20:58:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.372 00:05:04.372 real 0m0.283s 00:05:04.372 user 0m0.189s 00:05:04.372 sys 0m0.029s 00:05:04.372 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.372 20:58:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.372 ************************************ 00:05:04.372 END TEST rpc_integrity 00:05:04.372 ************************************ 00:05:04.372 20:58:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:04.372 20:58:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.372 20:58:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.372 20:58:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.372 ************************************ 00:05:04.372 START TEST rpc_plugins 
00:05:04.372 ************************************ 00:05:04.372 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:04.372 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:04.372 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.372 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:04.631 { 00:05:04.631 "name": "Malloc1", 00:05:04.631 "aliases": [ 00:05:04.631 "176600ae-2341-48a7-baff-ca0625d0ed19" 00:05:04.631 ], 00:05:04.631 "product_name": "Malloc disk", 00:05:04.631 "block_size": 4096, 00:05:04.631 "num_blocks": 256, 00:05:04.631 "uuid": "176600ae-2341-48a7-baff-ca0625d0ed19", 00:05:04.631 "assigned_rate_limits": { 00:05:04.631 "rw_ios_per_sec": 0, 00:05:04.631 "rw_mbytes_per_sec": 0, 00:05:04.631 "r_mbytes_per_sec": 0, 00:05:04.631 "w_mbytes_per_sec": 0 00:05:04.631 }, 00:05:04.631 "claimed": false, 00:05:04.631 "zoned": false, 00:05:04.631 "supported_io_types": { 00:05:04.631 "read": true, 00:05:04.631 "write": true, 00:05:04.631 "unmap": true, 00:05:04.631 "flush": true, 00:05:04.631 "reset": true, 00:05:04.631 "nvme_admin": false, 00:05:04.631 "nvme_io": false, 00:05:04.631 "nvme_io_md": false, 00:05:04.631 "write_zeroes": true, 00:05:04.631 "zcopy": true, 00:05:04.631 "get_zone_info": false, 00:05:04.631 "zone_management": false, 00:05:04.631 
"zone_append": false, 00:05:04.631 "compare": false, 00:05:04.631 "compare_and_write": false, 00:05:04.631 "abort": true, 00:05:04.631 "seek_hole": false, 00:05:04.631 "seek_data": false, 00:05:04.631 "copy": true, 00:05:04.631 "nvme_iov_md": false 00:05:04.631 }, 00:05:04.631 "memory_domains": [ 00:05:04.631 { 00:05:04.631 "dma_device_id": "system", 00:05:04.631 "dma_device_type": 1 00:05:04.631 }, 00:05:04.631 { 00:05:04.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.631 "dma_device_type": 2 00:05:04.631 } 00:05:04.631 ], 00:05:04.631 "driver_specific": {} 00:05:04.631 } 00:05:04.631 ]' 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:04.631 20:58:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:04.631 00:05:04.631 real 0m0.150s 00:05:04.631 user 0m0.089s 00:05:04.631 sys 0m0.023s 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.631 20:58:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 ************************************ 
00:05:04.631 END TEST rpc_plugins 00:05:04.631 ************************************ 00:05:04.631 20:58:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:04.631 20:58:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.631 20:58:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.631 20:58:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 ************************************ 00:05:04.631 START TEST rpc_trace_cmd_test 00:05:04.631 ************************************ 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.631 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:04.631 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1842430", 00:05:04.631 "tpoint_group_mask": "0x8", 00:05:04.631 "iscsi_conn": { 00:05:04.631 "mask": "0x2", 00:05:04.631 "tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "scsi": { 00:05:04.631 "mask": "0x4", 00:05:04.631 "tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "bdev": { 00:05:04.631 "mask": "0x8", 00:05:04.631 "tpoint_mask": "0xffffffffffffffff" 00:05:04.631 }, 00:05:04.631 "nvmf_rdma": { 00:05:04.631 "mask": "0x10", 00:05:04.631 "tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "nvmf_tcp": { 00:05:04.631 "mask": "0x20", 00:05:04.631 "tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "ftl": { 00:05:04.631 "mask": "0x40", 00:05:04.631 "tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "blobfs": { 00:05:04.631 "mask": "0x80", 00:05:04.631 
"tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "dsa": { 00:05:04.631 "mask": "0x200", 00:05:04.631 "tpoint_mask": "0x0" 00:05:04.631 }, 00:05:04.631 "thread": { 00:05:04.632 "mask": "0x400", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "nvme_pcie": { 00:05:04.632 "mask": "0x800", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "iaa": { 00:05:04.632 "mask": "0x1000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "nvme_tcp": { 00:05:04.632 "mask": "0x2000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "bdev_nvme": { 00:05:04.632 "mask": "0x4000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "sock": { 00:05:04.632 "mask": "0x8000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "blob": { 00:05:04.632 "mask": "0x10000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "bdev_raid": { 00:05:04.632 "mask": "0x20000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 }, 00:05:04.632 "scheduler": { 00:05:04.632 "mask": "0x40000", 00:05:04.632 "tpoint_mask": "0x0" 00:05:04.632 } 00:05:04.632 }' 00:05:04.632 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:04.891 00:05:04.891 real 0m0.213s 00:05:04.891 user 0m0.176s 00:05:04.891 sys 0m0.027s 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.891 20:58:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:04.891 ************************************ 00:05:04.891 END TEST rpc_trace_cmd_test 00:05:04.891 ************************************ 00:05:04.891 20:58:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:04.891 20:58:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:04.891 20:58:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:04.891 20:58:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.891 20:58:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.891 20:58:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.891 ************************************ 00:05:04.891 START TEST rpc_daemon_integrity 00:05:04.891 ************************************ 00:05:04.891 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:04.891 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.891 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.891 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.891 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.891 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:05.151 { 00:05:05.151 "name": "Malloc2", 00:05:05.151 "aliases": [ 00:05:05.151 "5ee5c4d5-fc26-4f67-9fe8-9631cdd33676" 00:05:05.151 ], 00:05:05.151 "product_name": "Malloc disk", 00:05:05.151 "block_size": 512, 00:05:05.151 "num_blocks": 16384, 00:05:05.151 "uuid": "5ee5c4d5-fc26-4f67-9fe8-9631cdd33676", 00:05:05.151 "assigned_rate_limits": { 00:05:05.151 "rw_ios_per_sec": 0, 00:05:05.151 "rw_mbytes_per_sec": 0, 00:05:05.151 "r_mbytes_per_sec": 0, 00:05:05.151 "w_mbytes_per_sec": 0 00:05:05.151 }, 00:05:05.151 "claimed": false, 00:05:05.151 "zoned": false, 00:05:05.151 "supported_io_types": { 00:05:05.151 "read": true, 00:05:05.151 "write": true, 00:05:05.151 "unmap": true, 00:05:05.151 "flush": true, 00:05:05.151 "reset": true, 00:05:05.151 "nvme_admin": false, 00:05:05.151 "nvme_io": false, 00:05:05.151 "nvme_io_md": false, 00:05:05.151 "write_zeroes": true, 00:05:05.151 "zcopy": true, 00:05:05.151 "get_zone_info": false, 00:05:05.151 "zone_management": false, 00:05:05.151 "zone_append": false, 00:05:05.151 "compare": false, 00:05:05.151 "compare_and_write": false, 00:05:05.151 "abort": true, 00:05:05.151 "seek_hole": false, 00:05:05.151 "seek_data": false, 00:05:05.151 "copy": true, 00:05:05.151 "nvme_iov_md": false 00:05:05.151 }, 00:05:05.151 "memory_domains": [ 00:05:05.151 { 
00:05:05.151 "dma_device_id": "system", 00:05:05.151 "dma_device_type": 1 00:05:05.151 }, 00:05:05.151 { 00:05:05.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.151 "dma_device_type": 2 00:05:05.151 } 00:05:05.151 ], 00:05:05.151 "driver_specific": {} 00:05:05.151 } 00:05:05.151 ]' 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 [2024-12-05 20:58:06.453164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:05.151 [2024-12-05 20:58:06.453193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:05.151 [2024-12-05 20:58:06.453206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xce3ff0 00:05:05.151 [2024-12-05 20:58:06.453213] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:05.151 [2024-12-05 20:58:06.454462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:05.151 [2024-12-05 20:58:06.454483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:05.151 Passthru0 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:05.151 { 00:05:05.151 "name": "Malloc2", 00:05:05.151 "aliases": [ 00:05:05.151 "5ee5c4d5-fc26-4f67-9fe8-9631cdd33676" 00:05:05.151 ], 00:05:05.151 "product_name": "Malloc disk", 00:05:05.151 "block_size": 512, 00:05:05.151 "num_blocks": 16384, 00:05:05.151 "uuid": "5ee5c4d5-fc26-4f67-9fe8-9631cdd33676", 00:05:05.151 "assigned_rate_limits": { 00:05:05.151 "rw_ios_per_sec": 0, 00:05:05.151 "rw_mbytes_per_sec": 0, 00:05:05.151 "r_mbytes_per_sec": 0, 00:05:05.151 "w_mbytes_per_sec": 0 00:05:05.151 }, 00:05:05.151 "claimed": true, 00:05:05.151 "claim_type": "exclusive_write", 00:05:05.151 "zoned": false, 00:05:05.151 "supported_io_types": { 00:05:05.151 "read": true, 00:05:05.151 "write": true, 00:05:05.151 "unmap": true, 00:05:05.151 "flush": true, 00:05:05.151 "reset": true, 00:05:05.151 "nvme_admin": false, 00:05:05.151 "nvme_io": false, 00:05:05.151 "nvme_io_md": false, 00:05:05.151 "write_zeroes": true, 00:05:05.151 "zcopy": true, 00:05:05.151 "get_zone_info": false, 00:05:05.151 "zone_management": false, 00:05:05.151 "zone_append": false, 00:05:05.151 "compare": false, 00:05:05.151 "compare_and_write": false, 00:05:05.151 "abort": true, 00:05:05.151 "seek_hole": false, 00:05:05.151 "seek_data": false, 00:05:05.151 "copy": true, 00:05:05.151 "nvme_iov_md": false 00:05:05.151 }, 00:05:05.151 "memory_domains": [ 00:05:05.151 { 00:05:05.151 "dma_device_id": "system", 00:05:05.151 "dma_device_type": 1 00:05:05.151 }, 00:05:05.151 { 00:05:05.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.151 "dma_device_type": 2 00:05:05.151 } 00:05:05.151 ], 00:05:05.151 "driver_specific": {} 00:05:05.151 }, 00:05:05.151 { 00:05:05.151 "name": "Passthru0", 00:05:05.151 "aliases": [ 00:05:05.151 "4bcc35a2-f84e-50b3-85b6-464088829292" 00:05:05.151 ], 00:05:05.151 "product_name": "passthru", 00:05:05.151 "block_size": 512, 00:05:05.151 "num_blocks": 16384, 00:05:05.151 "uuid": 
"4bcc35a2-f84e-50b3-85b6-464088829292", 00:05:05.151 "assigned_rate_limits": { 00:05:05.151 "rw_ios_per_sec": 0, 00:05:05.151 "rw_mbytes_per_sec": 0, 00:05:05.151 "r_mbytes_per_sec": 0, 00:05:05.151 "w_mbytes_per_sec": 0 00:05:05.151 }, 00:05:05.151 "claimed": false, 00:05:05.151 "zoned": false, 00:05:05.151 "supported_io_types": { 00:05:05.151 "read": true, 00:05:05.151 "write": true, 00:05:05.151 "unmap": true, 00:05:05.151 "flush": true, 00:05:05.151 "reset": true, 00:05:05.151 "nvme_admin": false, 00:05:05.151 "nvme_io": false, 00:05:05.151 "nvme_io_md": false, 00:05:05.151 "write_zeroes": true, 00:05:05.151 "zcopy": true, 00:05:05.151 "get_zone_info": false, 00:05:05.151 "zone_management": false, 00:05:05.151 "zone_append": false, 00:05:05.151 "compare": false, 00:05:05.151 "compare_and_write": false, 00:05:05.151 "abort": true, 00:05:05.151 "seek_hole": false, 00:05:05.151 "seek_data": false, 00:05:05.151 "copy": true, 00:05:05.151 "nvme_iov_md": false 00:05:05.151 }, 00:05:05.151 "memory_domains": [ 00:05:05.151 { 00:05:05.151 "dma_device_id": "system", 00:05:05.151 "dma_device_type": 1 00:05:05.151 }, 00:05:05.151 { 00:05:05.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:05.151 "dma_device_type": 2 00:05:05.151 } 00:05:05.151 ], 00:05:05.151 "driver_specific": { 00:05:05.151 "passthru": { 00:05:05.151 "name": "Passthru0", 00:05:05.151 "base_bdev_name": "Malloc2" 00:05:05.151 } 00:05:05.151 } 00:05:05.151 } 00:05:05.151 ]' 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.151 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:05.152 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:05.411 20:58:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:05.411 00:05:05.411 real 0m0.299s 00:05:05.411 user 0m0.186s 00:05:05.411 sys 0m0.049s 00:05:05.411 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.411 20:58:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:05.411 ************************************ 00:05:05.411 END TEST rpc_daemon_integrity 00:05:05.411 ************************************ 00:05:05.411 20:58:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:05.411 20:58:06 rpc -- rpc/rpc.sh@84 -- # killprocess 1842430 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 1842430 ']' 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@958 -- # kill -0 1842430 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.411 20:58:06 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1842430 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1842430' 00:05:05.411 killing process with pid 1842430 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@973 -- # kill 1842430 00:05:05.411 20:58:06 rpc -- common/autotest_common.sh@978 -- # wait 1842430 00:05:05.671 00:05:05.671 real 0m2.564s 00:05:05.671 user 0m3.338s 00:05:05.671 sys 0m0.716s 00:05:05.671 20:58:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.671 20:58:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.671 ************************************ 00:05:05.671 END TEST rpc 00:05:05.671 ************************************ 00:05:05.671 20:58:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:05.671 20:58:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.671 20:58:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.671 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.671 ************************************ 00:05:05.671 START TEST skip_rpc 00:05:05.671 ************************************ 00:05:05.671 20:58:06 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:05.671 * Looking for test storage... 
00:05:05.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.671 20:58:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.671 20:58:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.671 20:58:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.931 20:58:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.931 --rc genhtml_branch_coverage=1 00:05:05.931 --rc genhtml_function_coverage=1 00:05:05.931 --rc genhtml_legend=1 00:05:05.931 --rc geninfo_all_blocks=1 00:05:05.931 --rc geninfo_unexecuted_blocks=1 00:05:05.931 00:05:05.931 ' 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.931 --rc genhtml_branch_coverage=1 00:05:05.931 --rc genhtml_function_coverage=1 00:05:05.931 --rc genhtml_legend=1 00:05:05.931 --rc geninfo_all_blocks=1 00:05:05.931 --rc geninfo_unexecuted_blocks=1 00:05:05.931 00:05:05.931 ' 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.931 --rc genhtml_branch_coverage=1 00:05:05.931 --rc genhtml_function_coverage=1 00:05:05.931 --rc genhtml_legend=1 00:05:05.931 --rc geninfo_all_blocks=1 00:05:05.931 --rc geninfo_unexecuted_blocks=1 00:05:05.931 00:05:05.931 ' 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.931 --rc genhtml_branch_coverage=1 00:05:05.931 --rc genhtml_function_coverage=1 00:05:05.931 --rc genhtml_legend=1 00:05:05.931 --rc geninfo_all_blocks=1 00:05:05.931 --rc geninfo_unexecuted_blocks=1 00:05:05.931 00:05:05.931 ' 00:05:05.931 20:58:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.931 20:58:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:05.931 20:58:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.931 20:58:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.931 ************************************ 00:05:05.931 START TEST skip_rpc 00:05:05.931 ************************************ 00:05:05.931 20:58:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:05.931 20:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1843277 00:05:05.931 20:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.931 20:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:05.931 20:58:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:05.931 [2024-12-05 20:58:07.303430] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:05.931 [2024-12-05 20:58:07.303488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843277 ] 00:05:06.190 [2024-12-05 20:58:07.385969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.190 [2024-12-05 20:58:07.427593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.477 20:58:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1843277 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1843277 ']' 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1843277 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843277 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843277' 00:05:11.477 killing process with pid 1843277 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1843277 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1843277 00:05:11.477 00:05:11.477 real 0m5.285s 00:05:11.477 user 0m5.070s 00:05:11.477 sys 0m0.266s 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.477 20:58:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.477 ************************************ 00:05:11.477 END TEST skip_rpc 00:05:11.477 ************************************ 00:05:11.477 20:58:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:11.477 20:58:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.477 20:58:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.477 20:58:12 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.477 ************************************ 00:05:11.477 START TEST skip_rpc_with_json 00:05:11.477 ************************************ 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1844316 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1844316 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1844316 ']' 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.477 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.477 [2024-12-05 20:58:12.644196] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:11.477 [2024-12-05 20:58:12.644232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844316 ] 00:05:11.477 [2024-12-05 20:58:12.711942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.477 [2024-12-05 20:58:12.749150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.738 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.739 [2024-12-05 20:58:12.940997] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.739 request: 00:05:11.739 { 00:05:11.739 "trtype": "tcp", 00:05:11.739 "method": "nvmf_get_transports", 00:05:11.739 "req_id": 1 00:05:11.739 } 00:05:11.739 Got JSON-RPC error response 00:05:11.739 response: 00:05:11.739 { 00:05:11.739 "code": -19, 00:05:11.739 "message": "No such device" 00:05:11.739 } 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.739 [2024-12-05 20:58:12.953122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.739 20:58:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.739 20:58:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.739 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.739 20:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.739 { 00:05:11.739 "subsystems": [ 00:05:11.739 { 00:05:11.739 "subsystem": "fsdev", 00:05:11.739 "config": [ 00:05:11.739 { 00:05:11.739 "method": "fsdev_set_opts", 00:05:11.739 "params": { 00:05:11.739 "fsdev_io_pool_size": 65535, 00:05:11.739 "fsdev_io_cache_size": 256 00:05:11.739 } 00:05:11.739 } 00:05:11.739 ] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "vfio_user_target", 00:05:11.739 "config": null 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "keyring", 00:05:11.739 "config": [] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "iobuf", 00:05:11.739 "config": [ 00:05:11.739 { 00:05:11.739 "method": "iobuf_set_options", 00:05:11.739 "params": { 00:05:11.739 "small_pool_count": 8192, 00:05:11.739 "large_pool_count": 1024, 00:05:11.739 "small_bufsize": 8192, 00:05:11.739 "large_bufsize": 135168, 00:05:11.739 "enable_numa": false 00:05:11.739 } 00:05:11.739 } 00:05:11.739 ] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "sock", 00:05:11.739 "config": [ 00:05:11.739 { 00:05:11.739 "method": "sock_set_default_impl", 00:05:11.739 "params": { 00:05:11.739 "impl_name": "posix" 00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "sock_impl_set_options", 00:05:11.739 "params": { 00:05:11.739 "impl_name": "ssl", 00:05:11.739 "recv_buf_size": 4096, 00:05:11.739 "send_buf_size": 4096, 
00:05:11.739 "enable_recv_pipe": true, 00:05:11.739 "enable_quickack": false, 00:05:11.739 "enable_placement_id": 0, 00:05:11.739 "enable_zerocopy_send_server": true, 00:05:11.739 "enable_zerocopy_send_client": false, 00:05:11.739 "zerocopy_threshold": 0, 00:05:11.739 "tls_version": 0, 00:05:11.739 "enable_ktls": false 00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "sock_impl_set_options", 00:05:11.739 "params": { 00:05:11.739 "impl_name": "posix", 00:05:11.739 "recv_buf_size": 2097152, 00:05:11.739 "send_buf_size": 2097152, 00:05:11.739 "enable_recv_pipe": true, 00:05:11.739 "enable_quickack": false, 00:05:11.739 "enable_placement_id": 0, 00:05:11.739 "enable_zerocopy_send_server": true, 00:05:11.739 "enable_zerocopy_send_client": false, 00:05:11.739 "zerocopy_threshold": 0, 00:05:11.739 "tls_version": 0, 00:05:11.739 "enable_ktls": false 00:05:11.739 } 00:05:11.739 } 00:05:11.739 ] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "vmd", 00:05:11.739 "config": [] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "accel", 00:05:11.739 "config": [ 00:05:11.739 { 00:05:11.739 "method": "accel_set_options", 00:05:11.739 "params": { 00:05:11.739 "small_cache_size": 128, 00:05:11.739 "large_cache_size": 16, 00:05:11.739 "task_count": 2048, 00:05:11.739 "sequence_count": 2048, 00:05:11.739 "buf_count": 2048 00:05:11.739 } 00:05:11.739 } 00:05:11.739 ] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "bdev", 00:05:11.739 "config": [ 00:05:11.739 { 00:05:11.739 "method": "bdev_set_options", 00:05:11.739 "params": { 00:05:11.739 "bdev_io_pool_size": 65535, 00:05:11.739 "bdev_io_cache_size": 256, 00:05:11.739 "bdev_auto_examine": true, 00:05:11.739 "iobuf_small_cache_size": 128, 00:05:11.739 "iobuf_large_cache_size": 16 00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "bdev_raid_set_options", 00:05:11.739 "params": { 00:05:11.739 "process_window_size_kb": 1024, 00:05:11.739 "process_max_bandwidth_mb_sec": 0 
00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "bdev_iscsi_set_options", 00:05:11.739 "params": { 00:05:11.739 "timeout_sec": 30 00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "bdev_nvme_set_options", 00:05:11.739 "params": { 00:05:11.739 "action_on_timeout": "none", 00:05:11.739 "timeout_us": 0, 00:05:11.739 "timeout_admin_us": 0, 00:05:11.739 "keep_alive_timeout_ms": 10000, 00:05:11.739 "arbitration_burst": 0, 00:05:11.739 "low_priority_weight": 0, 00:05:11.739 "medium_priority_weight": 0, 00:05:11.739 "high_priority_weight": 0, 00:05:11.739 "nvme_adminq_poll_period_us": 10000, 00:05:11.739 "nvme_ioq_poll_period_us": 0, 00:05:11.739 "io_queue_requests": 0, 00:05:11.739 "delay_cmd_submit": true, 00:05:11.739 "transport_retry_count": 4, 00:05:11.739 "bdev_retry_count": 3, 00:05:11.739 "transport_ack_timeout": 0, 00:05:11.739 "ctrlr_loss_timeout_sec": 0, 00:05:11.739 "reconnect_delay_sec": 0, 00:05:11.739 "fast_io_fail_timeout_sec": 0, 00:05:11.739 "disable_auto_failback": false, 00:05:11.739 "generate_uuids": false, 00:05:11.739 "transport_tos": 0, 00:05:11.739 "nvme_error_stat": false, 00:05:11.739 "rdma_srq_size": 0, 00:05:11.739 "io_path_stat": false, 00:05:11.739 "allow_accel_sequence": false, 00:05:11.739 "rdma_max_cq_size": 0, 00:05:11.739 "rdma_cm_event_timeout_ms": 0, 00:05:11.739 "dhchap_digests": [ 00:05:11.739 "sha256", 00:05:11.739 "sha384", 00:05:11.739 "sha512" 00:05:11.739 ], 00:05:11.739 "dhchap_dhgroups": [ 00:05:11.739 "null", 00:05:11.739 "ffdhe2048", 00:05:11.739 "ffdhe3072", 00:05:11.739 "ffdhe4096", 00:05:11.739 "ffdhe6144", 00:05:11.739 "ffdhe8192" 00:05:11.739 ] 00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "bdev_nvme_set_hotplug", 00:05:11.739 "params": { 00:05:11.739 "period_us": 100000, 00:05:11.739 "enable": false 00:05:11.739 } 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "method": "bdev_wait_for_examine" 00:05:11.739 } 00:05:11.739 ] 00:05:11.739 }, 00:05:11.739 { 
00:05:11.739 "subsystem": "scsi", 00:05:11.739 "config": null 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "scheduler", 00:05:11.739 "config": [ 00:05:11.739 { 00:05:11.739 "method": "framework_set_scheduler", 00:05:11.739 "params": { 00:05:11.739 "name": "static" 00:05:11.739 } 00:05:11.739 } 00:05:11.739 ] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "vhost_scsi", 00:05:11.739 "config": [] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "vhost_blk", 00:05:11.739 "config": [] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "ublk", 00:05:11.739 "config": [] 00:05:11.739 }, 00:05:11.739 { 00:05:11.739 "subsystem": "nbd", 00:05:11.739 "config": [] 00:05:11.739 }, 00:05:11.739 { 00:05:11.740 "subsystem": "nvmf", 00:05:11.740 "config": [ 00:05:11.740 { 00:05:11.740 "method": "nvmf_set_config", 00:05:11.740 "params": { 00:05:11.740 "discovery_filter": "match_any", 00:05:11.740 "admin_cmd_passthru": { 00:05:11.740 "identify_ctrlr": false 00:05:11.740 }, 00:05:11.740 "dhchap_digests": [ 00:05:11.740 "sha256", 00:05:11.740 "sha384", 00:05:11.740 "sha512" 00:05:11.740 ], 00:05:11.740 "dhchap_dhgroups": [ 00:05:11.740 "null", 00:05:11.740 "ffdhe2048", 00:05:11.740 "ffdhe3072", 00:05:11.740 "ffdhe4096", 00:05:11.740 "ffdhe6144", 00:05:11.740 "ffdhe8192" 00:05:11.740 ] 00:05:11.740 } 00:05:11.740 }, 00:05:11.740 { 00:05:11.740 "method": "nvmf_set_max_subsystems", 00:05:11.740 "params": { 00:05:11.740 "max_subsystems": 1024 00:05:11.740 } 00:05:11.740 }, 00:05:11.740 { 00:05:11.740 "method": "nvmf_set_crdt", 00:05:11.740 "params": { 00:05:11.740 "crdt1": 0, 00:05:11.740 "crdt2": 0, 00:05:11.740 "crdt3": 0 00:05:11.740 } 00:05:11.740 }, 00:05:11.740 { 00:05:11.740 "method": "nvmf_create_transport", 00:05:11.740 "params": { 00:05:11.740 "trtype": "TCP", 00:05:11.740 "max_queue_depth": 128, 00:05:11.740 "max_io_qpairs_per_ctrlr": 127, 00:05:11.740 "in_capsule_data_size": 4096, 00:05:11.740 "max_io_size": 131072, 00:05:11.740 
"io_unit_size": 131072, 00:05:11.740 "max_aq_depth": 128, 00:05:11.740 "num_shared_buffers": 511, 00:05:11.740 "buf_cache_size": 4294967295, 00:05:11.740 "dif_insert_or_strip": false, 00:05:11.740 "zcopy": false, 00:05:11.740 "c2h_success": true, 00:05:11.740 "sock_priority": 0, 00:05:11.740 "abort_timeout_sec": 1, 00:05:11.740 "ack_timeout": 0, 00:05:11.740 "data_wr_pool_size": 0 00:05:11.740 } 00:05:11.740 } 00:05:11.740 ] 00:05:11.740 }, 00:05:11.740 { 00:05:11.740 "subsystem": "iscsi", 00:05:11.740 "config": [ 00:05:11.740 { 00:05:11.740 "method": "iscsi_set_options", 00:05:11.740 "params": { 00:05:11.740 "node_base": "iqn.2016-06.io.spdk", 00:05:11.740 "max_sessions": 128, 00:05:11.740 "max_connections_per_session": 2, 00:05:11.740 "max_queue_depth": 64, 00:05:11.740 "default_time2wait": 2, 00:05:11.740 "default_time2retain": 20, 00:05:11.740 "first_burst_length": 8192, 00:05:11.740 "immediate_data": true, 00:05:11.740 "allow_duplicated_isid": false, 00:05:11.740 "error_recovery_level": 0, 00:05:11.740 "nop_timeout": 60, 00:05:11.740 "nop_in_interval": 30, 00:05:11.740 "disable_chap": false, 00:05:11.740 "require_chap": false, 00:05:11.740 "mutual_chap": false, 00:05:11.740 "chap_group": 0, 00:05:11.740 "max_large_datain_per_connection": 64, 00:05:11.740 "max_r2t_per_connection": 4, 00:05:11.740 "pdu_pool_size": 36864, 00:05:11.740 "immediate_data_pool_size": 16384, 00:05:11.740 "data_out_pool_size": 2048 00:05:11.740 } 00:05:11.740 } 00:05:11.740 ] 00:05:11.740 } 00:05:11.740 ] 00:05:11.740 } 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1844316 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1844316 ']' 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1844316 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.740 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844316 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844316' 00:05:12.001 killing process with pid 1844316 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1844316 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1844316 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1844403 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:12.001 20:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1844403 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1844403 ']' 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1844403 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844403 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844403' 00:05:17.285 killing process with pid 1844403 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1844403 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1844403 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.285 00:05:17.285 real 0m6.089s 00:05:17.285 user 0m5.862s 00:05:17.285 sys 0m0.533s 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.285 20:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.285 ************************************ 00:05:17.285 END TEST skip_rpc_with_json 00:05:17.285 ************************************ 00:05:17.547 20:58:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.547 20:58:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.547 20:58:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.547 20:58:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.547 ************************************ 00:05:17.547 START TEST skip_rpc_with_delay 00:05:17.547 ************************************ 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.547 [2024-12-05 20:58:18.831992] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.547 00:05:17.547 real 0m0.082s 00:05:17.547 user 0m0.052s 00:05:17.547 sys 0m0.029s 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.547 20:58:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.547 ************************************ 00:05:17.547 END TEST skip_rpc_with_delay 00:05:17.547 ************************************ 00:05:17.547 20:58:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.547 20:58:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.547 20:58:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.547 20:58:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.547 20:58:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.547 20:58:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.547 ************************************ 00:05:17.547 START TEST exit_on_failed_rpc_init 00:05:17.547 ************************************ 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1845704 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1845704 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1845704 ']' 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.547 20:58:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.809 [2024-12-05 20:58:18.992910] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:17.809 [2024-12-05 20:58:18.992981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845704 ] 00:05:17.809 [2024-12-05 20:58:19.075689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.809 [2024-12-05 20:58:19.117503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.381 
20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:18.381 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.642 [2024-12-05 20:58:19.823507] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:18.642 [2024-12-05 20:58:19.823560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845738 ] 00:05:18.642 [2024-12-05 20:58:19.915295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.642 [2024-12-05 20:58:19.951723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.642 [2024-12-05 20:58:19.951777] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:18.642 [2024-12-05 20:58:19.951787] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.642 [2024-12-05 20:58:19.951793] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1845704 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1845704 ']' 00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1845704 00:05:18.642 20:58:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:18.642 20:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845704
00:05:18.642 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:18.642 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:18.642 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845704'
00:05:18.642 killing process with pid 1845704
00:05:18.642 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1845704
00:05:18.642 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1845704
00:05:18.903
00:05:18.903 real	0m1.335s
00:05:18.903 user	0m1.551s
00:05:18.903 sys	0m0.385s
00:05:18.903 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.903 20:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:18.903 ************************************
00:05:18.903 END TEST exit_on_failed_rpc_init
00:05:18.903 ************************************
00:05:18.903 20:58:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:18.903
00:05:18.903 real	0m13.314s
00:05:18.903 user	0m12.754s
00:05:18.903 sys	0m1.541s
00:05:18.903 20:58:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.903 20:58:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.903 ************************************
00:05:18.903 END TEST skip_rpc
00:05:18.903 ************************************
00:05:19.164 20:58:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:19.164 20:58:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.164 20:58:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.164 20:58:20 -- common/autotest_common.sh@10 -- # set +x
00:05:19.164 ************************************
00:05:19.164 START TEST rpc_client
00:05:19.164 ************************************
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:19.164 * Looking for test storage...
00:05:19.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:19.164 20:58:20 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.164 --rc genhtml_branch_coverage=1
00:05:19.164 --rc genhtml_function_coverage=1
00:05:19.164 --rc genhtml_legend=1
00:05:19.164 --rc geninfo_all_blocks=1
00:05:19.164 --rc geninfo_unexecuted_blocks=1
00:05:19.164
00:05:19.164 '
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.164 --rc genhtml_branch_coverage=1
00:05:19.164 --rc genhtml_function_coverage=1
00:05:19.164 --rc genhtml_legend=1
00:05:19.164 --rc geninfo_all_blocks=1
00:05:19.164 --rc geninfo_unexecuted_blocks=1
00:05:19.164
00:05:19.164 '
00:05:19.164 20:58:20 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.164 --rc genhtml_branch_coverage=1
00:05:19.164 --rc genhtml_function_coverage=1
00:05:19.164 --rc genhtml_legend=1
00:05:19.165 --rc geninfo_all_blocks=1
00:05:19.165 --rc geninfo_unexecuted_blocks=1
00:05:19.165
00:05:19.165 '
00:05:19.165 20:58:20 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:19.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.165 --rc genhtml_branch_coverage=1
00:05:19.165 --rc genhtml_function_coverage=1
00:05:19.165 --rc genhtml_legend=1
00:05:19.165 --rc geninfo_all_blocks=1
00:05:19.165 --rc geninfo_unexecuted_blocks=1
00:05:19.165
00:05:19.165 '
00:05:19.165 20:58:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:19.425 OK
00:05:19.425 20:58:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:19.425
00:05:19.425 real	0m0.228s
00:05:19.425 user	0m0.135s
00:05:19.425 sys	0m0.108s
00:05:19.425 20:58:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.425 20:58:20 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:19.425 ************************************
00:05:19.425 END TEST rpc_client
00:05:19.425 ************************************
00:05:19.425 20:58:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:19.425 20:58:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.425 20:58:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.425 20:58:20 -- common/autotest_common.sh@10 -- # set +x
00:05:19.425 ************************************
00:05:19.425 START TEST json_config
00:05:19.425 ************************************
00:05:19.425 20:58:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:19.425 20:58:20 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:19.426 20:58:20 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:05:19.426 20:58:20 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:19.426 20:58:20 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:19.426 20:58:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:19.426 20:58:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:19.426 20:58:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:19.426 20:58:20 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:19.426 20:58:20 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:19.426 20:58:20 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:19.426 20:58:20 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:19.426 20:58:20 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:19.426 20:58:20 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:19.426 20:58:20 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:19.426 20:58:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:19.426 20:58:20 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:19.426 20:58:20 json_config -- scripts/common.sh@345 -- # : 1
00:05:19.426 20:58:20 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:19.426 20:58:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:19.426 20:58:20 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:19.426 20:58:20 json_config -- scripts/common.sh@353 -- # local d=1
00:05:19.426 20:58:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:19.426 20:58:20 json_config -- scripts/common.sh@355 -- # echo 1
00:05:19.426 20:58:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:19.687 20:58:20 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:19.687 20:58:20 json_config -- scripts/common.sh@353 -- # local d=2
00:05:19.687 20:58:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:19.687 20:58:20 json_config -- scripts/common.sh@355 -- # echo 2
00:05:19.687 20:58:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:19.687 20:58:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:19.687 20:58:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:19.687 20:58:20 json_config -- scripts/common.sh@368 -- # return 0
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:19.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.688 --rc genhtml_branch_coverage=1
00:05:19.688 --rc genhtml_function_coverage=1
00:05:19.688 --rc genhtml_legend=1
00:05:19.688 --rc geninfo_all_blocks=1
00:05:19.688 --rc geninfo_unexecuted_blocks=1
00:05:19.688
00:05:19.688 '
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:19.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.688 --rc genhtml_branch_coverage=1
00:05:19.688 --rc genhtml_function_coverage=1
00:05:19.688 --rc genhtml_legend=1
00:05:19.688 --rc geninfo_all_blocks=1
00:05:19.688 --rc geninfo_unexecuted_blocks=1
00:05:19.688
00:05:19.688 '
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:19.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.688 --rc genhtml_branch_coverage=1
00:05:19.688 --rc genhtml_function_coverage=1
00:05:19.688 --rc genhtml_legend=1
00:05:19.688 --rc geninfo_all_blocks=1
00:05:19.688 --rc geninfo_unexecuted_blocks=1
00:05:19.688
00:05:19.688 '
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:19.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.688 --rc genhtml_branch_coverage=1
00:05:19.688 --rc genhtml_function_coverage=1
00:05:19.688 --rc genhtml_legend=1
00:05:19.688 --rc geninfo_all_blocks=1
00:05:19.688 --rc geninfo_unexecuted_blocks=1
00:05:19.688
00:05:19.688 '
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:19.688 20:58:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:19.688 20:58:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:19.688 20:58:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:19.688 20:58:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:19.688 20:58:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.688 20:58:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.688 20:58:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.688 20:58:20 json_config -- paths/export.sh@5 -- # export PATH
00:05:19.688 20:58:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@51 -- # : 0
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:19.688 20:58:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:19.688 20:58:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:05:19.688 20:58:20 json_config -- json_config/common.sh@9 -- # local app=target
00:05:19.688 20:58:20 json_config -- json_config/common.sh@10 -- # shift
00:05:19.688 20:58:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:19.688 20:58:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:19.688 20:58:20 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:19.688 20:58:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:19.688 20:58:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:19.688 20:58:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1846192
00:05:19.688 20:58:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:19.688 20:58:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1846192 /var/tmp/spdk_tgt.sock
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 1846192 ']'
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:19.688 20:58:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.688 20:58:20 json_config -- common/autotest_common.sh@10 -- # set +x
[2024-12-05 20:58:20.978931] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:05:19.688 [2024-12-05 20:58:20.978991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846192 ]
00:05:19.949 [2024-12-05 20:58:21.277921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.949 [2024-12-05 20:58:21.307638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.518 20:58:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.518 20:58:21 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:20.518 20:58:21 json_config -- json_config/common.sh@26 -- # echo ''
00:05:20.518
00:05:20.518 20:58:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:05:20.518 20:58:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:05:20.518 20:58:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:20.518 20:58:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.518 20:58:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:05:20.518 20:58:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:05:20.518 20:58:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:20.518 20:58:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.518 20:58:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:20.518 20:58:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:05:20.518 20:58:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:21.106 20:58:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:21.106 20:58:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:05:21.106 20:58:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:05:21.106 20:58:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@51 -- # local get_types
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@54 -- # sort
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:05:21.367 20:58:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:21.367 20:58:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@62 -- # return 0
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:05:21.367 20:58:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:21.367 20:58:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:05:21.367 20:58:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:21.367 20:58:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
MallocForNvmf0
00:05:21.626 20:58:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:21.626 20:58:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
MallocForNvmf1
00:05:21.626 20:58:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:21.626 20:58:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:21.886 [2024-12-05 20:58:23.151217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:21.886 20:58:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:21.886 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:22.145 20:58:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:22.146 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:22.146 20:58:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:22.146 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:22.405 20:58:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:22.405 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:22.665 [2024-12-05 20:58:23.865509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:22.665 20:58:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:05:22.665 20:58:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:22.665 20:58:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.665 20:58:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:05:22.665 20:58:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:22.665 20:58:23 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.665 20:58:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:05:22.665 20:58:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:22.665 20:58:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
MallocBdevForConfigChangeCheck
00:05:22.927 20:58:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:05:22.927 20:58:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:22.927 20:58:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:22.927 20:58:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:05:22.927 20:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:23.187 20:58:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:05:23.187 20:58:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:23.187 20:58:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:23.187 20:58:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:23.187 20:58:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:23.761 Calling clear_iscsi_subsystem
00:05:23.761 Calling clear_nvmf_subsystem
00:05:23.761 Calling clear_nbd_subsystem
00:05:23.761 Calling clear_ublk_subsystem
00:05:23.761 Calling clear_vhost_blk_subsystem
00:05:23.761 Calling clear_vhost_scsi_subsystem
00:05:23.761 Calling clear_bdev_subsystem
00:05:23.761 20:58:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:23.761 20:58:24 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:23.761 20:58:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:23.761 20:58:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:23.761 20:58:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:23.761 20:58:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:24.023 20:58:25 json_config -- json_config/json_config.sh@352 -- # break
00:05:24.023 20:58:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:24.023 20:58:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:24.023 20:58:25 json_config -- json_config/common.sh@31 -- # local app=target
00:05:24.023 20:58:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:24.023 20:58:25 json_config -- json_config/common.sh@35 -- # [[ -n 1846192 ]]
00:05:24.023 20:58:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1846192
00:05:24.023 20:58:25 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:24.023 20:58:25 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:24.023 20:58:25 json_config -- json_config/common.sh@41 -- # kill -0 1846192
00:05:24.023 20:58:25 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:24.602 20:58:25 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:24.602 20:58:25 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:24.602 20:58:25 json_config -- json_config/common.sh@41 -- # kill -0 1846192
00:05:24.602 20:58:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:24.602 20:58:25 json_config -- json_config/common.sh@43 -- # break
00:05:24.602 20:58:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:24.602 20:58:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:05:24.602 20:58:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
INFO: relaunching applications...
00:05:24.602 20:58:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.602 20:58:25 json_config -- json_config/common.sh@9 -- # local app=target
00:05:24.602 20:58:25 json_config -- json_config/common.sh@10 -- # shift
00:05:24.602 20:58:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:24.602 20:58:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:24.602 20:58:25 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:24.602 20:58:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:24.602 20:58:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:24.602 20:58:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1847329
00:05:24.602 20:58:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:24.602 20:58:25 json_config -- json_config/common.sh@25 -- # waitforlisten 1847329 /var/tmp/spdk_tgt.sock
00:05:24.602 20:58:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.602 20:58:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 1847329 ']'
00:05:24.602 20:58:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:24.602 20:58:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:24.602 20:58:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:24.602 20:58:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.602 20:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 [2024-12-05 20:58:25.844720] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:24.602 [2024-12-05 20:58:25.844778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847329 ] 00:05:24.900 [2024-12-05 20:58:26.140389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.900 [2024-12-05 20:58:26.169953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.513 [2024-12-05 20:58:26.692061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.513 [2024-12-05 20:58:26.724441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:25.513 20:58:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.513 20:58:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:25.513 20:58:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:25.513 00:05:25.513 20:58:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:25.513 20:58:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:25.513 INFO: Checking if target configuration is the same... 
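`waitforlisten` above blocks until the relaunched target is accepting connections on `/var/tmp/spdk_tgt.sock`. A reduced sketch of that wait loop, polling only for the UNIX-domain socket path to appear — an assumption; SPDK's real helper additionally confirms the RPC server answers:

```shell
# Hypothetical reduction of waitforlisten: poll until a UNIX-domain socket
# path exists, up to max_retries attempts (default mirrors the trace's 100).
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```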
00:05:25.513 20:58:26 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.513 20:58:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:25.513 20:58:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.513 + '[' 2 -ne 2 ']' 00:05:25.513 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:25.513 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:25.513 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.513 +++ basename /dev/fd/62 00:05:25.513 ++ mktemp /tmp/62.XXX 00:05:25.513 + tmp_file_1=/tmp/62.Sxw 00:05:25.513 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:25.513 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:25.513 + tmp_file_2=/tmp/spdk_tgt_config.json.nQP 00:05:25.513 + ret=0 00:05:25.513 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.774 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:25.774 + diff -u /tmp/62.Sxw /tmp/spdk_tgt_config.json.nQP 00:05:25.774 + echo 'INFO: JSON config files are the same' 00:05:25.774 INFO: JSON config files are the same 00:05:25.774 + rm /tmp/62.Sxw /tmp/spdk_tgt_config.json.nQP 00:05:25.774 + exit 0 00:05:25.774 20:58:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:25.774 20:58:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:25.774 INFO: changing configuration and checking if this can be detected... 
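The "Checking if target configuration is the same" step above saves the live config over RPC (`save_config`), normalizes both JSON documents with `config_filter.py -method sort`, and byte-compares them with `diff`. A minimal sketch of that normalize-then-diff idea, with an inline `python3` one-liner standing in for `config_filter.py` (an assumption, not the script's actual behavior):

```shell
# Stand-in for "config_filter.py -method sort": re-serialize JSON with
# sorted keys so semantically equal configs become byte-identical.
sort_json() {
    python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Two configs that differ only in key order:
echo '{"subsystems": [], "rpc_version": 2}' | sort_json > "$tmp_file_1"
echo '{"rpc_version": 2, "subsystems": []}' | sort_json > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
    echo 'INFO: JSON config files are the same'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```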
00:05:25.774 20:58:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:25.774 20:58:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:26.033 20:58:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.034 20:58:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:26.034 20:58:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.034 + '[' 2 -ne 2 ']' 00:05:26.034 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:26.034 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:26.034 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:26.034 +++ basename /dev/fd/62 00:05:26.034 ++ mktemp /tmp/62.XXX 00:05:26.034 + tmp_file_1=/tmp/62.i9n 00:05:26.034 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.034 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.034 + tmp_file_2=/tmp/spdk_tgt_config.json.Yeb 00:05:26.034 + ret=0 00:05:26.034 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.293 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.293 + diff -u /tmp/62.i9n /tmp/spdk_tgt_config.json.Yeb 00:05:26.293 + ret=1 00:05:26.293 + echo '=== Start of file: /tmp/62.i9n ===' 00:05:26.293 + cat /tmp/62.i9n 00:05:26.293 + echo '=== End of file: /tmp/62.i9n ===' 00:05:26.293 + echo '' 00:05:26.293 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Yeb ===' 00:05:26.293 + cat /tmp/spdk_tgt_config.json.Yeb 00:05:26.293 + echo '=== End of file: /tmp/spdk_tgt_config.json.Yeb ===' 00:05:26.293 + echo '' 00:05:26.293 + rm /tmp/62.i9n /tmp/spdk_tgt_config.json.Yeb 00:05:26.293 + exit 1 00:05:26.293 20:58:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:26.293 INFO: configuration change detected. 
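Once the change is detected (`ret=1` and both files dumped above), the test tears the target down; the `killprocess` helper in the trace verifies the PID's command name with `ps --no-headers -o comm=` before signalling, so a recycled PID (or `sudo`) is never killed by mistake. A sketch of that guard — GNU `ps` options on Linux; the refusal cases are simplified assumptions:

```shell
# Sketch of the killprocess guard: confirm the PID is alive and its command
# name is not "sudo" before sending the default SIGTERM.
safe_killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1            # nothing to kill
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then
        return 1                                       # never signal sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid" 2>/dev/null
    return 0
}
```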
00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:26.294 20:58:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.294 20:58:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 1847329 ]] 00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:26.294 20:58:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:26.294 20:58:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.294 20:58:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.554 20:58:27 json_config -- json_config/json_config.sh@330 -- # killprocess 1847329 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@954 -- # '[' -z 1847329 ']' 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@958 -- # kill -0 
1847329 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@959 -- # uname 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847329 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847329' 00:05:26.554 killing process with pid 1847329 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@973 -- # kill 1847329 00:05:26.554 20:58:27 json_config -- common/autotest_common.sh@978 -- # wait 1847329 00:05:26.815 20:58:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.815 20:58:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:26.816 20:58:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.816 20:58:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.816 20:58:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:26.816 20:58:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:26.816 INFO: Success 00:05:26.816 00:05:26.816 real 0m7.472s 00:05:26.816 user 0m9.072s 00:05:26.816 sys 0m1.987s 00:05:26.816 20:58:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.816 20:58:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.816 ************************************ 00:05:26.816 END TEST json_config 00:05:26.816 ************************************ 00:05:26.816 20:58:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.816 20:58:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.816 20:58:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.816 20:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:26.816 ************************************ 00:05:26.816 START TEST json_config_extra_key 00:05:26.816 ************************************ 00:05:26.816 20:58:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.078 --rc genhtml_branch_coverage=1 00:05:27.078 --rc genhtml_function_coverage=1 00:05:27.078 --rc genhtml_legend=1 00:05:27.078 --rc geninfo_all_blocks=1 
00:05:27.078 --rc geninfo_unexecuted_blocks=1 00:05:27.078 00:05:27.078 ' 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.078 --rc genhtml_branch_coverage=1 00:05:27.078 --rc genhtml_function_coverage=1 00:05:27.078 --rc genhtml_legend=1 00:05:27.078 --rc geninfo_all_blocks=1 00:05:27.078 --rc geninfo_unexecuted_blocks=1 00:05:27.078 00:05:27.078 ' 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.078 --rc genhtml_branch_coverage=1 00:05:27.078 --rc genhtml_function_coverage=1 00:05:27.078 --rc genhtml_legend=1 00:05:27.078 --rc geninfo_all_blocks=1 00:05:27.078 --rc geninfo_unexecuted_blocks=1 00:05:27.078 00:05:27.078 ' 00:05:27.078 20:58:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.078 --rc genhtml_branch_coverage=1 00:05:27.078 --rc genhtml_function_coverage=1 00:05:27.078 --rc genhtml_legend=1 00:05:27.078 --rc geninfo_all_blocks=1 00:05:27.078 --rc geninfo_unexecuted_blocks=1 00:05:27.078 00:05:27.078 ' 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
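The `lcov --version` guard traced above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) splits each version on `.`, `-`, and `:` and compares component-wise, padding the shorter list with zeros. A self-contained sketch of the less-than case — the idea behind `scripts/common.sh`, not its verbatim code:

```shell
# Component-wise "version less-than": split on . - :, compare fields
# numerically, treat missing trailing fields as 0.
version_lt() {
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1    # equal versions are not less-than
}
```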
00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.078 20:58:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.078 20:58:28 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.078 20:58:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.078 20:58:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.078 20:58:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:27.078 20:58:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:27.078 20:58:28 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.078 20:58:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:27.078 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:27.079 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:27.079 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.079 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:27.079 INFO: launching applications... 00:05:27.079 20:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1847822 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.079 Waiting for target to run... 
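`json_config/common.sh`, sourced above, keeps per-app state in bash associative arrays keyed by app name (`app_pid`, `app_socket`, `app_params`, `configs_path`), so the same helpers can drive a "target" and, in other tests, an "initiator". A minimal illustration of that pattern — socket and params values are copied from the trace; the stub and its placeholder PID are hypothetical:

```shell
# Per-app bookkeeping via associative arrays, as in json_config/common.sh.
declare -A app_socket=( [target]=/var/tmp/spdk_tgt.sock )
declare -A app_params=( [target]='-m 0x1 -s 1024' )
declare -A app_pid=( [target]='' )

start_app_stub() {
    local app=$1
    # A real helper would launch spdk_tgt here and record $! instead:
    app_pid["$app"]=12345
    echo "app=$app socket=${app_socket[$app]} params=${app_params[$app]}"
}

start_app_stub target
```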
00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1847822 /var/tmp/spdk_tgt.sock 00:05:27.079 20:58:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1847822 ']' 00:05:27.079 20:58:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.079 20:58:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.079 20:58:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.079 20:58:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.079 20:58:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.079 20:58:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.339 [2024-12-05 20:58:28.524104] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:27.339 [2024-12-05 20:58:28.524186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847822 ] 00:05:27.600 [2024-12-05 20:58:28.806683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.600 [2024-12-05 20:58:28.835691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.171 20:58:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.171 20:58:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:28.171 00:05:28.171 20:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:28.171 INFO: shutting down applications... 00:05:28.171 20:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1847822 ]] 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1847822 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1847822 00:05:28.171 20:58:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.431 20:58:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.431 20:58:29 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.431 20:58:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1847822 00:05:28.431 20:58:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.431 20:58:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.431 20:58:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.431 20:58:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.431 SPDK target shutdown done 00:05:28.431 20:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.431 Success 00:05:28.431 00:05:28.431 real 0m1.568s 00:05:28.431 user 0m1.217s 00:05:28.431 sys 0m0.380s 00:05:28.431 20:58:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.431 20:58:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.431 ************************************ 00:05:28.431 END TEST json_config_extra_key 00:05:28.431 ************************************ 00:05:28.431 20:58:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.431 20:58:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.431 20:58:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.431 20:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:28.692 ************************************ 00:05:28.692 START TEST alias_rpc 00:05:28.692 ************************************ 00:05:28.692 20:58:29 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.692 * Looking for test storage... 
00:05:28.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:28.692 20:58:29 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.692 20:58:29 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.692 20:58:29 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.692 20:58:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.692 --rc genhtml_branch_coverage=1 00:05:28.692 --rc genhtml_function_coverage=1 00:05:28.692 --rc genhtml_legend=1 00:05:28.692 --rc geninfo_all_blocks=1 00:05:28.692 --rc geninfo_unexecuted_blocks=1 00:05:28.692 00:05:28.692 ' 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.692 --rc genhtml_branch_coverage=1 00:05:28.692 --rc genhtml_function_coverage=1 00:05:28.692 --rc genhtml_legend=1 00:05:28.692 --rc geninfo_all_blocks=1 00:05:28.692 --rc geninfo_unexecuted_blocks=1 00:05:28.692 00:05:28.692 ' 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.692 --rc genhtml_branch_coverage=1 00:05:28.692 --rc genhtml_function_coverage=1 00:05:28.692 --rc genhtml_legend=1 00:05:28.692 --rc geninfo_all_blocks=1 00:05:28.692 --rc geninfo_unexecuted_blocks=1 00:05:28.692 00:05:28.692 ' 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.692 --rc genhtml_branch_coverage=1 00:05:28.692 --rc genhtml_function_coverage=1 00:05:28.692 --rc genhtml_legend=1 00:05:28.692 --rc geninfo_all_blocks=1 00:05:28.692 --rc geninfo_unexecuted_blocks=1 00:05:28.692 00:05:28.692 ' 00:05:28.692 20:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.692 20:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1848202 00:05:28.692 20:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1848202 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1848202 ']' 00:05:28.692 20:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.692 20:58:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.952 [2024-12-05 20:58:30.148972] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
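The `waitforlisten 1848202` call traced above blocks until the freshly launched `spdk_tgt` is accepting connections on `/var/tmp/spdk.sock`. A minimal standalone sketch of that polling pattern is below; the helper name, retry count, and sleep interval are illustrative assumptions, not the actual `common/autotest_common.sh` implementation.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a "wait until a daemon listens on a UNIX socket"
# helper, in the spirit of waitforlisten. Polls for the socket path to
# appear, with a bounded retry budget.
wait_for_unix_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # -S is true once the path exists and is a socket
        if [ -S "$sock" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

In the real test, the RPC client only proceeds (e.g. `rpc.py load_config`) after this wait succeeds, which is why the trace shows the `Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...` banner before any RPC output.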
00:05:28.952 [2024-12-05 20:58:30.149043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848202 ] 00:05:28.952 [2024-12-05 20:58:30.233901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.952 [2024-12-05 20:58:30.274678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.522 20:58:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.522 20:58:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.522 20:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.781 20:58:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1848202 00:05:29.781 20:58:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1848202 ']' 00:05:29.781 20:58:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1848202 00:05:29.781 20:58:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.781 20:58:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.781 20:58:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1848202 00:05:30.040 20:58:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.040 20:58:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.040 20:58:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1848202' 00:05:30.040 killing process with pid 1848202 00:05:30.040 20:58:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 1848202 00:05:30.040 20:58:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 1848202 00:05:30.040 00:05:30.040 real 0m1.540s 00:05:30.040 user 0m1.719s 00:05:30.040 sys 0m0.407s 00:05:30.040 20:58:31 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.040 20:58:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.040 ************************************ 00:05:30.040 END TEST alias_rpc 00:05:30.040 ************************************ 00:05:30.040 20:58:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:30.040 20:58:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.040 20:58:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.040 20:58:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.040 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.300 ************************************ 00:05:30.300 START TEST spdkcli_tcp 00:05:30.300 ************************************ 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:30.300 * Looking for test storage... 
00:05:30.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.300 20:58:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:30.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.300 --rc genhtml_branch_coverage=1 00:05:30.300 --rc genhtml_function_coverage=1 00:05:30.300 --rc genhtml_legend=1 00:05:30.300 --rc geninfo_all_blocks=1 00:05:30.300 --rc geninfo_unexecuted_blocks=1 00:05:30.300 00:05:30.300 ' 00:05:30.300 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:30.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.300 --rc genhtml_branch_coverage=1 00:05:30.300 --rc genhtml_function_coverage=1 00:05:30.300 --rc genhtml_legend=1 00:05:30.300 --rc geninfo_all_blocks=1 00:05:30.300 --rc geninfo_unexecuted_blocks=1 00:05:30.300 00:05:30.300 ' 00:05:30.300 20:58:31 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:30.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.301 --rc genhtml_branch_coverage=1 00:05:30.301 --rc genhtml_function_coverage=1 00:05:30.301 --rc genhtml_legend=1 00:05:30.301 --rc geninfo_all_blocks=1 00:05:30.301 --rc geninfo_unexecuted_blocks=1 00:05:30.301 00:05:30.301 ' 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:30.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.301 --rc genhtml_branch_coverage=1 00:05:30.301 --rc genhtml_function_coverage=1 00:05:30.301 --rc genhtml_legend=1 00:05:30.301 --rc geninfo_all_blocks=1 00:05:30.301 --rc geninfo_unexecuted_blocks=1 00:05:30.301 00:05:30.301 ' 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1848594 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1848594 00:05:30.301 20:58:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1848594 ']' 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.301 20:58:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.561 [2024-12-05 20:58:31.778701] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:30.561 [2024-12-05 20:58:31.778778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848594 ] 00:05:30.561 [2024-12-05 20:58:31.866326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.561 [2024-12-05 20:58:31.912188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.561 [2024-12-05 20:58:31.912278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.501 20:58:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.501 20:58:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:31.501 20:58:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1848919 00:05:31.501 20:58:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:31.501 20:58:32 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:31.501 [ 00:05:31.501 "bdev_malloc_delete", 00:05:31.501 "bdev_malloc_create", 00:05:31.501 "bdev_null_resize", 00:05:31.501 "bdev_null_delete", 00:05:31.501 "bdev_null_create", 00:05:31.501 "bdev_nvme_cuse_unregister", 00:05:31.501 "bdev_nvme_cuse_register", 00:05:31.501 "bdev_opal_new_user", 00:05:31.501 "bdev_opal_set_lock_state", 00:05:31.501 "bdev_opal_delete", 00:05:31.501 "bdev_opal_get_info", 00:05:31.501 "bdev_opal_create", 00:05:31.501 "bdev_nvme_opal_revert", 00:05:31.501 "bdev_nvme_opal_init", 00:05:31.501 "bdev_nvme_send_cmd", 00:05:31.501 "bdev_nvme_set_keys", 00:05:31.501 "bdev_nvme_get_path_iostat", 00:05:31.501 "bdev_nvme_get_mdns_discovery_info", 00:05:31.501 "bdev_nvme_stop_mdns_discovery", 00:05:31.501 "bdev_nvme_start_mdns_discovery", 00:05:31.501 "bdev_nvme_set_multipath_policy", 00:05:31.501 "bdev_nvme_set_preferred_path", 00:05:31.501 "bdev_nvme_get_io_paths", 00:05:31.501 "bdev_nvme_remove_error_injection", 00:05:31.501 "bdev_nvme_add_error_injection", 00:05:31.501 "bdev_nvme_get_discovery_info", 00:05:31.501 "bdev_nvme_stop_discovery", 00:05:31.501 "bdev_nvme_start_discovery", 00:05:31.501 "bdev_nvme_get_controller_health_info", 00:05:31.501 "bdev_nvme_disable_controller", 00:05:31.501 "bdev_nvme_enable_controller", 00:05:31.501 "bdev_nvme_reset_controller", 00:05:31.501 "bdev_nvme_get_transport_statistics", 00:05:31.501 "bdev_nvme_apply_firmware", 00:05:31.501 "bdev_nvme_detach_controller", 00:05:31.501 "bdev_nvme_get_controllers", 00:05:31.501 "bdev_nvme_attach_controller", 00:05:31.501 "bdev_nvme_set_hotplug", 00:05:31.501 "bdev_nvme_set_options", 00:05:31.501 "bdev_passthru_delete", 00:05:31.501 "bdev_passthru_create", 00:05:31.501 "bdev_lvol_set_parent_bdev", 00:05:31.501 "bdev_lvol_set_parent", 00:05:31.501 "bdev_lvol_check_shallow_copy", 00:05:31.501 "bdev_lvol_start_shallow_copy", 00:05:31.501 "bdev_lvol_grow_lvstore", 00:05:31.501 
"bdev_lvol_get_lvols", 00:05:31.501 "bdev_lvol_get_lvstores", 00:05:31.501 "bdev_lvol_delete", 00:05:31.501 "bdev_lvol_set_read_only", 00:05:31.501 "bdev_lvol_resize", 00:05:31.501 "bdev_lvol_decouple_parent", 00:05:31.501 "bdev_lvol_inflate", 00:05:31.501 "bdev_lvol_rename", 00:05:31.501 "bdev_lvol_clone_bdev", 00:05:31.501 "bdev_lvol_clone", 00:05:31.501 "bdev_lvol_snapshot", 00:05:31.501 "bdev_lvol_create", 00:05:31.501 "bdev_lvol_delete_lvstore", 00:05:31.501 "bdev_lvol_rename_lvstore", 00:05:31.501 "bdev_lvol_create_lvstore", 00:05:31.501 "bdev_raid_set_options", 00:05:31.501 "bdev_raid_remove_base_bdev", 00:05:31.501 "bdev_raid_add_base_bdev", 00:05:31.501 "bdev_raid_delete", 00:05:31.501 "bdev_raid_create", 00:05:31.501 "bdev_raid_get_bdevs", 00:05:31.501 "bdev_error_inject_error", 00:05:31.501 "bdev_error_delete", 00:05:31.501 "bdev_error_create", 00:05:31.501 "bdev_split_delete", 00:05:31.501 "bdev_split_create", 00:05:31.501 "bdev_delay_delete", 00:05:31.501 "bdev_delay_create", 00:05:31.501 "bdev_delay_update_latency", 00:05:31.501 "bdev_zone_block_delete", 00:05:31.501 "bdev_zone_block_create", 00:05:31.501 "blobfs_create", 00:05:31.501 "blobfs_detect", 00:05:31.501 "blobfs_set_cache_size", 00:05:31.501 "bdev_aio_delete", 00:05:31.501 "bdev_aio_rescan", 00:05:31.501 "bdev_aio_create", 00:05:31.501 "bdev_ftl_set_property", 00:05:31.501 "bdev_ftl_get_properties", 00:05:31.501 "bdev_ftl_get_stats", 00:05:31.501 "bdev_ftl_unmap", 00:05:31.501 "bdev_ftl_unload", 00:05:31.501 "bdev_ftl_delete", 00:05:31.501 "bdev_ftl_load", 00:05:31.501 "bdev_ftl_create", 00:05:31.501 "bdev_virtio_attach_controller", 00:05:31.501 "bdev_virtio_scsi_get_devices", 00:05:31.501 "bdev_virtio_detach_controller", 00:05:31.501 "bdev_virtio_blk_set_hotplug", 00:05:31.501 "bdev_iscsi_delete", 00:05:31.501 "bdev_iscsi_create", 00:05:31.501 "bdev_iscsi_set_options", 00:05:31.501 "accel_error_inject_error", 00:05:31.501 "ioat_scan_accel_module", 00:05:31.501 "dsa_scan_accel_module", 
00:05:31.501 "iaa_scan_accel_module", 00:05:31.501 "vfu_virtio_create_fs_endpoint", 00:05:31.501 "vfu_virtio_create_scsi_endpoint", 00:05:31.501 "vfu_virtio_scsi_remove_target", 00:05:31.501 "vfu_virtio_scsi_add_target", 00:05:31.501 "vfu_virtio_create_blk_endpoint", 00:05:31.501 "vfu_virtio_delete_endpoint", 00:05:31.501 "keyring_file_remove_key", 00:05:31.501 "keyring_file_add_key", 00:05:31.501 "keyring_linux_set_options", 00:05:31.501 "fsdev_aio_delete", 00:05:31.501 "fsdev_aio_create", 00:05:31.501 "iscsi_get_histogram", 00:05:31.501 "iscsi_enable_histogram", 00:05:31.501 "iscsi_set_options", 00:05:31.501 "iscsi_get_auth_groups", 00:05:31.501 "iscsi_auth_group_remove_secret", 00:05:31.501 "iscsi_auth_group_add_secret", 00:05:31.501 "iscsi_delete_auth_group", 00:05:31.501 "iscsi_create_auth_group", 00:05:31.501 "iscsi_set_discovery_auth", 00:05:31.501 "iscsi_get_options", 00:05:31.501 "iscsi_target_node_request_logout", 00:05:31.501 "iscsi_target_node_set_redirect", 00:05:31.501 "iscsi_target_node_set_auth", 00:05:31.501 "iscsi_target_node_add_lun", 00:05:31.501 "iscsi_get_stats", 00:05:31.501 "iscsi_get_connections", 00:05:31.501 "iscsi_portal_group_set_auth", 00:05:31.501 "iscsi_start_portal_group", 00:05:31.501 "iscsi_delete_portal_group", 00:05:31.501 "iscsi_create_portal_group", 00:05:31.501 "iscsi_get_portal_groups", 00:05:31.501 "iscsi_delete_target_node", 00:05:31.501 "iscsi_target_node_remove_pg_ig_maps", 00:05:31.501 "iscsi_target_node_add_pg_ig_maps", 00:05:31.501 "iscsi_create_target_node", 00:05:31.501 "iscsi_get_target_nodes", 00:05:31.501 "iscsi_delete_initiator_group", 00:05:31.501 "iscsi_initiator_group_remove_initiators", 00:05:31.502 "iscsi_initiator_group_add_initiators", 00:05:31.502 "iscsi_create_initiator_group", 00:05:31.502 "iscsi_get_initiator_groups", 00:05:31.502 "nvmf_set_crdt", 00:05:31.502 "nvmf_set_config", 00:05:31.502 "nvmf_set_max_subsystems", 00:05:31.502 "nvmf_stop_mdns_prr", 00:05:31.502 "nvmf_publish_mdns_prr", 
00:05:31.502 "nvmf_subsystem_get_listeners", 00:05:31.502 "nvmf_subsystem_get_qpairs", 00:05:31.502 "nvmf_subsystem_get_controllers", 00:05:31.502 "nvmf_get_stats", 00:05:31.502 "nvmf_get_transports", 00:05:31.502 "nvmf_create_transport", 00:05:31.502 "nvmf_get_targets", 00:05:31.502 "nvmf_delete_target", 00:05:31.502 "nvmf_create_target", 00:05:31.502 "nvmf_subsystem_allow_any_host", 00:05:31.502 "nvmf_subsystem_set_keys", 00:05:31.502 "nvmf_subsystem_remove_host", 00:05:31.502 "nvmf_subsystem_add_host", 00:05:31.502 "nvmf_ns_remove_host", 00:05:31.502 "nvmf_ns_add_host", 00:05:31.502 "nvmf_subsystem_remove_ns", 00:05:31.502 "nvmf_subsystem_set_ns_ana_group", 00:05:31.502 "nvmf_subsystem_add_ns", 00:05:31.502 "nvmf_subsystem_listener_set_ana_state", 00:05:31.502 "nvmf_discovery_get_referrals", 00:05:31.502 "nvmf_discovery_remove_referral", 00:05:31.502 "nvmf_discovery_add_referral", 00:05:31.502 "nvmf_subsystem_remove_listener", 00:05:31.502 "nvmf_subsystem_add_listener", 00:05:31.502 "nvmf_delete_subsystem", 00:05:31.502 "nvmf_create_subsystem", 00:05:31.502 "nvmf_get_subsystems", 00:05:31.502 "env_dpdk_get_mem_stats", 00:05:31.502 "nbd_get_disks", 00:05:31.502 "nbd_stop_disk", 00:05:31.502 "nbd_start_disk", 00:05:31.502 "ublk_recover_disk", 00:05:31.502 "ublk_get_disks", 00:05:31.502 "ublk_stop_disk", 00:05:31.502 "ublk_start_disk", 00:05:31.502 "ublk_destroy_target", 00:05:31.502 "ublk_create_target", 00:05:31.502 "virtio_blk_create_transport", 00:05:31.502 "virtio_blk_get_transports", 00:05:31.502 "vhost_controller_set_coalescing", 00:05:31.502 "vhost_get_controllers", 00:05:31.502 "vhost_delete_controller", 00:05:31.502 "vhost_create_blk_controller", 00:05:31.502 "vhost_scsi_controller_remove_target", 00:05:31.502 "vhost_scsi_controller_add_target", 00:05:31.502 "vhost_start_scsi_controller", 00:05:31.502 "vhost_create_scsi_controller", 00:05:31.502 "thread_set_cpumask", 00:05:31.502 "scheduler_set_options", 00:05:31.502 "framework_get_governor", 00:05:31.502 
"framework_get_scheduler", 00:05:31.502 "framework_set_scheduler", 00:05:31.502 "framework_get_reactors", 00:05:31.502 "thread_get_io_channels", 00:05:31.502 "thread_get_pollers", 00:05:31.502 "thread_get_stats", 00:05:31.502 "framework_monitor_context_switch", 00:05:31.502 "spdk_kill_instance", 00:05:31.502 "log_enable_timestamps", 00:05:31.502 "log_get_flags", 00:05:31.502 "log_clear_flag", 00:05:31.502 "log_set_flag", 00:05:31.502 "log_get_level", 00:05:31.502 "log_set_level", 00:05:31.502 "log_get_print_level", 00:05:31.502 "log_set_print_level", 00:05:31.502 "framework_enable_cpumask_locks", 00:05:31.502 "framework_disable_cpumask_locks", 00:05:31.502 "framework_wait_init", 00:05:31.502 "framework_start_init", 00:05:31.502 "scsi_get_devices", 00:05:31.502 "bdev_get_histogram", 00:05:31.502 "bdev_enable_histogram", 00:05:31.502 "bdev_set_qos_limit", 00:05:31.502 "bdev_set_qd_sampling_period", 00:05:31.502 "bdev_get_bdevs", 00:05:31.502 "bdev_reset_iostat", 00:05:31.502 "bdev_get_iostat", 00:05:31.502 "bdev_examine", 00:05:31.502 "bdev_wait_for_examine", 00:05:31.502 "bdev_set_options", 00:05:31.502 "accel_get_stats", 00:05:31.502 "accel_set_options", 00:05:31.502 "accel_set_driver", 00:05:31.502 "accel_crypto_key_destroy", 00:05:31.502 "accel_crypto_keys_get", 00:05:31.502 "accel_crypto_key_create", 00:05:31.502 "accel_assign_opc", 00:05:31.502 "accel_get_module_info", 00:05:31.502 "accel_get_opc_assignments", 00:05:31.502 "vmd_rescan", 00:05:31.502 "vmd_remove_device", 00:05:31.502 "vmd_enable", 00:05:31.502 "sock_get_default_impl", 00:05:31.502 "sock_set_default_impl", 00:05:31.502 "sock_impl_set_options", 00:05:31.502 "sock_impl_get_options", 00:05:31.502 "iobuf_get_stats", 00:05:31.502 "iobuf_set_options", 00:05:31.502 "keyring_get_keys", 00:05:31.502 "vfu_tgt_set_base_path", 00:05:31.502 "framework_get_pci_devices", 00:05:31.502 "framework_get_config", 00:05:31.502 "framework_get_subsystems", 00:05:31.502 "fsdev_set_opts", 00:05:31.502 "fsdev_get_opts", 
00:05:31.502 "trace_get_info", 00:05:31.502 "trace_get_tpoint_group_mask", 00:05:31.502 "trace_disable_tpoint_group", 00:05:31.502 "trace_enable_tpoint_group", 00:05:31.502 "trace_clear_tpoint_mask", 00:05:31.502 "trace_set_tpoint_mask", 00:05:31.502 "notify_get_notifications", 00:05:31.502 "notify_get_types", 00:05:31.502 "spdk_get_version", 00:05:31.502 "rpc_get_methods" 00:05:31.502 ] 00:05:31.502 20:58:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.502 20:58:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:31.502 20:58:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1848594 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1848594 ']' 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1848594 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1848594 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1848594' 00:05:31.502 killing process with pid 1848594 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1848594 00:05:31.502 20:58:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1848594 00:05:31.763 00:05:31.763 real 0m1.555s 00:05:31.763 user 0m2.821s 00:05:31.763 sys 0m0.465s 00:05:31.763 20:58:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.763 20:58:33 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.763 ************************************ 00:05:31.763 END TEST spdkcli_tcp 00:05:31.763 ************************************ 00:05:31.763 20:58:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:31.763 20:58:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.763 20:58:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.763 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:05:31.763 ************************************ 00:05:31.763 START TEST dpdk_mem_utility 00:05:31.763 ************************************ 00:05:31.763 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.024 * Looking for test storage... 00:05:32.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.024 20:58:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:05:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.024 --rc genhtml_branch_coverage=1 00:05:32.024 --rc genhtml_function_coverage=1 00:05:32.024 --rc genhtml_legend=1 00:05:32.024 --rc geninfo_all_blocks=1 00:05:32.024 --rc geninfo_unexecuted_blocks=1 00:05:32.024 00:05:32.024 ' 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.024 --rc genhtml_branch_coverage=1 00:05:32.024 --rc genhtml_function_coverage=1 00:05:32.024 --rc genhtml_legend=1 00:05:32.024 --rc geninfo_all_blocks=1 00:05:32.024 --rc geninfo_unexecuted_blocks=1 00:05:32.024 00:05:32.024 ' 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.024 --rc genhtml_branch_coverage=1 00:05:32.024 --rc genhtml_function_coverage=1 00:05:32.024 --rc genhtml_legend=1 00:05:32.024 --rc geninfo_all_blocks=1 00:05:32.024 --rc geninfo_unexecuted_blocks=1 00:05:32.024 00:05:32.024 ' 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.024 --rc genhtml_branch_coverage=1 00:05:32.024 --rc genhtml_function_coverage=1 00:05:32.024 --rc genhtml_legend=1 00:05:32.024 --rc geninfo_all_blocks=1 00:05:32.024 --rc geninfo_unexecuted_blocks=1 00:05:32.024 00:05:32.024 ' 00:05:32.024 20:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.024 20:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1849006 00:05:32.024 20:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1849006 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # 
'[' -z 1849006 ']' 00:05:32.024 20:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.024 20:58:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.024 [2024-12-05 20:58:33.396029] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:32.025 [2024-12-05 20:58:33.396098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849006 ] 00:05:32.284 [2024-12-05 20:58:33.480341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.284 [2024-12-05 20:58:33.523573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.856 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.856 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:32.856 20:58:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:32.856 20:58:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:32.856 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 
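The `env_dpdk_get_mem_stats` RPC traced below writes a plain-text dump to `/tmp/spdk_mem_dump.txt`, which `scripts/dpdk_mem_info.py` then summarizes as `size: <n> MiB name: <pool>` lines. A small sketch of aggregating such lines with awk follows; the input file here is a stub with made-up pool names, not real dump output.

```shell
#!/usr/bin/env bash
# Illustrative only: sum the per-pool sizes from a spdk_mem_dump.txt-style
# report. The stub below mimics the "size: ... name: ..." line format seen
# in the dpdk_mem_info.py output in this log.
cat > /tmp/mem_dump_demo.txt <<'EOF'
size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 100.555481 MiB name: bdev_io_demo
EOF
# $2 is the numeric size field on each "size:" line
awk '/^size:/ { total += $2 } END { printf "total: %.6f MiB\n", total }' \
    /tmp/mem_dump_demo.txt
```

Running this prints `total: 471.832520 MiB` for the stub data, mirroring how the real script rolls individual mempools up into the "N mempools totaling size X MiB" summary shown in the dump.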
00:05:32.856 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.856 { 00:05:32.856 "filename": "/tmp/spdk_mem_dump.txt" 00:05:32.856 } 00:05:32.856 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.856 20:58:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:32.856 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:32.856 1 heaps totaling size 818.000000 MiB 00:05:32.856 size: 818.000000 MiB heap id: 0 00:05:32.856 end heaps---------- 00:05:32.856 9 mempools totaling size 603.782043 MiB 00:05:32.856 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:32.856 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:32.856 size: 100.555481 MiB name: bdev_io_1849006 00:05:32.856 size: 50.003479 MiB name: msgpool_1849006 00:05:32.856 size: 36.509338 MiB name: fsdev_io_1849006 00:05:32.856 size: 21.763794 MiB name: PDU_Pool 00:05:32.856 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:32.856 size: 4.133484 MiB name: evtpool_1849006 00:05:32.856 size: 0.026123 MiB name: Session_Pool 00:05:32.856 end mempools------- 00:05:32.856 6 memzones totaling size 4.142822 MiB 00:05:32.856 size: 1.000366 MiB name: RG_ring_0_1849006 00:05:32.856 size: 1.000366 MiB name: RG_ring_1_1849006 00:05:32.856 size: 1.000366 MiB name: RG_ring_4_1849006 00:05:32.856 size: 1.000366 MiB name: RG_ring_5_1849006 00:05:32.856 size: 0.125366 MiB name: RG_ring_2_1849006 00:05:32.856 size: 0.015991 MiB name: RG_ring_3_1849006 00:05:32.856 end memzones------- 00:05:32.856 20:58:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:32.856 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:32.856 list of free elements. 
size: 10.852478 MiB 00:05:32.856 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:32.856 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:32.856 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:32.856 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:32.856 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:32.856 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:32.856 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:32.856 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:32.856 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:32.856 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:32.856 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:32.856 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:32.856 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:32.856 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:32.856 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:32.856 list of standard malloc elements. 
size: 199.218628 MiB 00:05:32.856 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:32.856 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:32.856 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:32.856 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:32.856 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:32.856 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:32.856 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:32.856 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:32.856 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:32.856 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:32.856 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:32.856 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:32.856 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:32.856 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:32.856 list of memzone associated elements. 
size: 607.928894 MiB 00:05:32.856 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:32.856 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:32.856 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:32.856 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:32.856 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:32.857 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1849006_0 00:05:32.857 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:32.857 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1849006_0 00:05:32.857 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:32.857 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1849006_0 00:05:32.857 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:32.857 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:32.857 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:32.857 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:32.857 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:32.857 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1849006_0 00:05:32.857 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:32.857 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1849006 00:05:32.857 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1849006 00:05:32.857 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:32.857 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:32.857 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:32.857 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:32.857 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:32.857 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:32.857 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1849006 00:05:32.857 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1849006 00:05:32.857 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1849006 00:05:32.857 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:32.857 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1849006 00:05:32.857 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1849006 00:05:32.857 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1849006 00:05:32.857 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:32.857 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:32.857 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:32.857 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:32.857 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:32.857 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:32.857 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1849006 00:05:32.857 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:32.857 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1849006 00:05:32.857 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:32.857 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:32.857 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:32.857 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:32.857 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:32.857 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1849006 00:05:32.857 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:32.857 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:32.857 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1849006 00:05:32.857 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1849006 00:05:32.857 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1849006 00:05:32.857 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:32.857 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:32.857 20:58:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:32.857 20:58:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1849006 00:05:32.857 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1849006 ']' 00:05:32.857 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1849006 00:05:32.857 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:32.857 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.857 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1849006 00:05:33.118 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.118 20:58:34 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.118 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1849006' 00:05:33.118 killing process with pid 1849006 00:05:33.118 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1849006 00:05:33.118 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1849006 00:05:33.118 00:05:33.118 real 0m1.410s 00:05:33.118 user 0m1.481s 00:05:33.118 sys 0m0.412s 00:05:33.118 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.118 20:58:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.118 ************************************ 00:05:33.118 END TEST dpdk_mem_utility 00:05:33.118 ************************************ 00:05:33.379 20:58:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:33.380 20:58:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.380 20:58:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.380 20:58:34 -- common/autotest_common.sh@10 -- # set +x 00:05:33.380 ************************************ 00:05:33.380 START TEST event 00:05:33.380 ************************************ 00:05:33.380 20:58:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:33.380 * Looking for test storage... 
00:05:33.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:33.380 20:58:34 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.380 20:58:34 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.380 20:58:34 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.380 20:58:34 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.380 20:58:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.380 20:58:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.380 20:58:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.380 20:58:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.380 20:58:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.380 20:58:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.380 20:58:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.380 20:58:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.380 20:58:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.380 20:58:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.380 20:58:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.380 20:58:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:33.380 20:58:34 event -- scripts/common.sh@345 -- # : 1 00:05:33.380 20:58:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.380 20:58:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.380 20:58:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:33.380 20:58:34 event -- scripts/common.sh@353 -- # local d=1 00:05:33.380 20:58:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.380 20:58:34 event -- scripts/common.sh@355 -- # echo 1 00:05:33.641 20:58:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.641 20:58:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:33.641 20:58:34 event -- scripts/common.sh@353 -- # local d=2 00:05:33.641 20:58:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.641 20:58:34 event -- scripts/common.sh@355 -- # echo 2 00:05:33.641 20:58:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.641 20:58:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.641 20:58:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.641 20:58:34 event -- scripts/common.sh@368 -- # return 0 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.641 --rc genhtml_branch_coverage=1 00:05:33.641 --rc genhtml_function_coverage=1 00:05:33.641 --rc genhtml_legend=1 00:05:33.641 --rc geninfo_all_blocks=1 00:05:33.641 --rc geninfo_unexecuted_blocks=1 00:05:33.641 00:05:33.641 ' 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.641 --rc genhtml_branch_coverage=1 00:05:33.641 --rc genhtml_function_coverage=1 00:05:33.641 --rc genhtml_legend=1 00:05:33.641 --rc geninfo_all_blocks=1 00:05:33.641 --rc geninfo_unexecuted_blocks=1 00:05:33.641 00:05:33.641 ' 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.641 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:33.641 --rc genhtml_branch_coverage=1 00:05:33.641 --rc genhtml_function_coverage=1 00:05:33.641 --rc genhtml_legend=1 00:05:33.641 --rc geninfo_all_blocks=1 00:05:33.641 --rc geninfo_unexecuted_blocks=1 00:05:33.641 00:05:33.641 ' 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.641 --rc genhtml_branch_coverage=1 00:05:33.641 --rc genhtml_function_coverage=1 00:05:33.641 --rc genhtml_legend=1 00:05:33.641 --rc geninfo_all_blocks=1 00:05:33.641 --rc geninfo_unexecuted_blocks=1 00:05:33.641 00:05:33.641 ' 00:05:33.641 20:58:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:33.641 20:58:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:33.641 20:58:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:33.641 20:58:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.641 20:58:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.641 ************************************ 00:05:33.641 START TEST event_perf 00:05:33.641 ************************************ 00:05:33.641 20:58:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:33.641 Running I/O for 1 seconds...[2024-12-05 20:58:34.886669] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:33.641 [2024-12-05 20:58:34.886779] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849405 ] 00:05:33.641 [2024-12-05 20:58:34.972535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.641 [2024-12-05 20:58:35.017477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.641 [2024-12-05 20:58:35.017592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.641 [2024-12-05 20:58:35.017749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.642 Running I/O for 1 seconds...[2024-12-05 20:58:35.017749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.024 00:05:35.024 lcore 0: 177950 00:05:35.024 lcore 1: 177946 00:05:35.024 lcore 2: 177947 00:05:35.024 lcore 3: 177949 00:05:35.024 done. 
00:05:35.024 00:05:35.024 real 0m1.187s 00:05:35.024 user 0m4.101s 00:05:35.024 sys 0m0.081s 00:05:35.024 20:58:36 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.024 20:58:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.024 ************************************ 00:05:35.024 END TEST event_perf 00:05:35.024 ************************************ 00:05:35.024 20:58:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:35.024 20:58:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:35.024 20:58:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.024 20:58:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.024 ************************************ 00:05:35.024 START TEST event_reactor 00:05:35.024 ************************************ 00:05:35.024 20:58:36 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:35.024 [2024-12-05 20:58:36.134385] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:35.024 [2024-12-05 20:58:36.134473] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849762 ] 00:05:35.024 [2024-12-05 20:58:36.216419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.024 [2024-12-05 20:58:36.251258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.968 test_start 00:05:35.968 oneshot 00:05:35.968 tick 100 00:05:35.968 tick 100 00:05:35.968 tick 250 00:05:35.968 tick 100 00:05:35.968 tick 100 00:05:35.968 tick 250 00:05:35.968 tick 100 00:05:35.968 tick 500 00:05:35.968 tick 100 00:05:35.968 tick 100 00:05:35.968 tick 250 00:05:35.968 tick 100 00:05:35.968 tick 100 00:05:35.968 test_end 00:05:35.968 00:05:35.968 real 0m1.170s 00:05:35.968 user 0m1.097s 00:05:35.968 sys 0m0.069s 00:05:35.968 20:58:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.968 20:58:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:35.968 ************************************ 00:05:35.968 END TEST event_reactor 00:05:35.968 ************************************ 00:05:35.968 20:58:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:35.968 20:58:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:35.968 20:58:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.968 20:58:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.968 ************************************ 00:05:35.968 START TEST event_reactor_perf 00:05:35.968 ************************************ 00:05:35.968 20:58:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:35.968 [2024-12-05 20:58:37.367647] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:35.968 [2024-12-05 20:58:37.367696] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850084 ] 00:05:36.228 [2024-12-05 20:58:37.443616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.228 [2024-12-05 20:58:37.477440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.169 test_start 00:05:37.169 test_end 00:05:37.169 Performance: 365725 events per second 00:05:37.169 00:05:37.169 real 0m1.150s 00:05:37.169 user 0m1.081s 00:05:37.169 sys 0m0.066s 00:05:37.169 20:58:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.169 20:58:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.169 ************************************ 00:05:37.169 END TEST event_reactor_perf 00:05:37.169 ************************************ 00:05:37.169 20:58:38 event -- event/event.sh@49 -- # uname -s 00:05:37.169 20:58:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:37.169 20:58:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.169 20:58:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.169 20:58:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.169 20:58:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.169 ************************************ 00:05:37.169 START TEST event_scheduler 00:05:37.169 ************************************ 00:05:37.169 20:58:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:37.431 * Looking for test storage... 00:05:37.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:37.431 20:58:38 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.431 20:58:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.431 20:58:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.431 20:58:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.431 20:58:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:37.432 20:58:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.432 20:58:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.432 20:58:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.432 20:58:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.432 --rc genhtml_branch_coverage=1 00:05:37.432 --rc genhtml_function_coverage=1 00:05:37.432 --rc genhtml_legend=1 00:05:37.432 --rc geninfo_all_blocks=1 00:05:37.432 --rc geninfo_unexecuted_blocks=1 00:05:37.432 00:05:37.432 ' 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.432 --rc genhtml_branch_coverage=1 00:05:37.432 --rc genhtml_function_coverage=1 00:05:37.432 --rc 
genhtml_legend=1 00:05:37.432 --rc geninfo_all_blocks=1 00:05:37.432 --rc geninfo_unexecuted_blocks=1 00:05:37.432 00:05:37.432 ' 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.432 --rc genhtml_branch_coverage=1 00:05:37.432 --rc genhtml_function_coverage=1 00:05:37.432 --rc genhtml_legend=1 00:05:37.432 --rc geninfo_all_blocks=1 00:05:37.432 --rc geninfo_unexecuted_blocks=1 00:05:37.432 00:05:37.432 ' 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.432 --rc genhtml_branch_coverage=1 00:05:37.432 --rc genhtml_function_coverage=1 00:05:37.432 --rc genhtml_legend=1 00:05:37.432 --rc geninfo_all_blocks=1 00:05:37.432 --rc geninfo_unexecuted_blocks=1 00:05:37.432 00:05:37.432 ' 00:05:37.432 20:58:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:37.432 20:58:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1850341 00:05:37.432 20:58:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.432 20:58:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:37.432 20:58:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1850341 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1850341 ']' 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.432 20:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.432 [2024-12-05 20:58:38.838302] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:05:37.432 [2024-12-05 20:58:38.838396] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850341 ] 00:05:37.693 [2024-12-05 20:58:38.914713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.693 [2024-12-05 20:58:38.954964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.693 [2024-12-05 20:58:38.955121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.693 [2024-12-05 20:58:38.955274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.693 [2024-12-05 20:58:38.955276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:38.264 20:58:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.264 [2024-12-05 20:58:39.657436] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:38.264 [2024-12-05 20:58:39.657451] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:38.264 [2024-12-05 20:58:39.657460] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:38.264 [2024-12-05 20:58:39.657464] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:38.264 [2024-12-05 20:58:39.657468] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.264 20:58:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.264 20:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.524 [2024-12-05 20:58:39.718376] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:38.524 20:58:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.524 20:58:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:38.524 20:58:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.524 20:58:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.524 20:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.524 ************************************ 00:05:38.524 START TEST scheduler_create_thread 00:05:38.524 ************************************ 00:05:38.524 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:38.524 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:38.524 20:58:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.524 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.524 2 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 3 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 4 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 5 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 6 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 7 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 8 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.525 9 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.525 20:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.095 10 00:05:39.095 20:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.095 20:58:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:39.095 20:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.095 20:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.483 20:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.483 20:58:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.483 20:58:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.483 20:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.483 20:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.053 20:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.053 20:58:42 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.053 20:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.053 20:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.990 20:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.990 20:58:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:41.990 20:58:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:41.990 20:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.990 20:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.561 20:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.561 00:05:42.561 real 0m4.223s 00:05:42.561 user 0m0.022s 00:05:42.561 sys 0m0.009s 00:05:42.561 20:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.561 20:58:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.561 ************************************ 00:05:42.561 END TEST scheduler_create_thread 00:05:42.561 ************************************ 00:05:42.821 20:58:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:42.821 20:58:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1850341 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1850341 ']' 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1850341 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850341 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850341' 00:05:42.821 killing process with pid 1850341 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1850341 00:05:42.821 20:58:44 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1850341 00:05:43.081 [2024-12-05 20:58:44.263657] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:43.081 00:05:43.081 real 0m5.833s 00:05:43.081 user 0m13.008s 00:05:43.081 sys 0m0.406s 00:05:43.081 20:58:44 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.081 20:58:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.081 ************************************ 00:05:43.081 END TEST event_scheduler 00:05:43.081 ************************************ 00:05:43.081 20:58:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:43.081 20:58:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:43.081 20:58:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.081 20:58:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.081 20:58:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.081 ************************************ 00:05:43.081 START TEST app_repeat 00:05:43.081 ************************************ 00:05:43.081 20:58:44 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:43.081 20:58:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1851570 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1851570' 00:05:43.342 Process app_repeat pid: 1851570 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:43.342 spdk_app_start Round 0 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1851570 /var/tmp/spdk-nbd.sock 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1851570 ']' 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.342 [2024-12-05 20:58:44.548904] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:43.342 [2024-12-05 20:58:44.548999] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851570 ] 00:05:43.342 [2024-12-05 20:58:44.629495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.342 [2024-12-05 20:58:44.668074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.342 [2024-12-05 20:58:44.668164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.342 20:58:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.342 20:58:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.602 Malloc0 00:05:43.602 20:58:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.861 Malloc1 00:05:43.861 20:58:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.861 
20:58:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.861 20:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.122 /dev/nbd0 00:05:44.122 20:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.122 20:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:44.122 1+0 records in 00:05:44.122 1+0 records out 00:05:44.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286716 s, 14.3 MB/s 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.122 20:58:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.122 20:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.122 20:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.122 20:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.122 /dev/nbd1 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.382 20:58:45 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.382 1+0 records in 00:05:44.382 1+0 records out 00:05:44.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308239 s, 13.3 MB/s 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.382 20:58:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.382 20:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.383 { 00:05:44.383 "nbd_device": "/dev/nbd0", 00:05:44.383 "bdev_name": "Malloc0" 00:05:44.383 }, 00:05:44.383 { 00:05:44.383 "nbd_device": "/dev/nbd1", 00:05:44.383 "bdev_name": "Malloc1" 00:05:44.383 } 00:05:44.383 ]' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.383 { 00:05:44.383 "nbd_device": "/dev/nbd0", 00:05:44.383 "bdev_name": "Malloc0" 00:05:44.383 
}, 00:05:44.383 { 00:05:44.383 "nbd_device": "/dev/nbd1", 00:05:44.383 "bdev_name": "Malloc1" 00:05:44.383 } 00:05:44.383 ]' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.383 /dev/nbd1' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.383 /dev/nbd1' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.383 20:58:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.643 256+0 records in 00:05:44.643 256+0 records out 00:05:44.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121534 s, 86.3 MB/s 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.643 256+0 records in 00:05:44.643 256+0 records out 00:05:44.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164953 s, 63.6 MB/s 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.643 256+0 records in 00:05:44.643 256+0 records out 00:05:44.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183385 s, 57.2 MB/s 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.643 20:58:45 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.643 20:58:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.643 20:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.903 20:58:46 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.903 20:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.163 20:58:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.163 20:58:46 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.423 20:58:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.423 [2024-12-05 20:58:46.787919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.423 [2024-12-05 20:58:46.824428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.423 [2024-12-05 20:58:46.824431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.423 [2024-12-05 20:58:46.856105] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.423 [2024-12-05 20:58:46.856141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.722 20:58:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.722 20:58:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.722 spdk_app_start Round 1 00:05:48.722 20:58:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1851570 /var/tmp/spdk-nbd.sock 00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1851570 ']' 00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:48.722 20:58:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:48.722 20:58:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.722 Malloc0
00:05:48.722 20:58:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:48.983 Malloc1
00:05:48.983 20:58:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:48.983 /dev/nbd0
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:48.983 1+0 records in
00:05:48.983 1+0 records out
00:05:48.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288761 s, 14.2 MB/s
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:48.983 20:58:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:48.983 20:58:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:49.244 /dev/nbd1
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:49.244 1+0 records in
00:05:49.244 1+0 records out
00:05:49.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253327 s, 16.2 MB/s
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:49.244 20:58:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.244 20:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:49.504 20:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:49.504 {
00:05:49.504 "nbd_device": "/dev/nbd0",
00:05:49.504 "bdev_name": "Malloc0"
00:05:49.504 },
00:05:49.504 {
00:05:49.504 "nbd_device": "/dev/nbd1",
00:05:49.504 "bdev_name": "Malloc1"
00:05:49.504 }
00:05:49.504 ]'
00:05:49.504 20:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:49.504 {
00:05:49.504 "nbd_device": "/dev/nbd0",
00:05:49.504 "bdev_name": "Malloc0"
00:05:49.504 },
00:05:49.504 {
00:05:49.504 "nbd_device": "/dev/nbd1",
00:05:49.504 "bdev_name": "Malloc1"
00:05:49.504 }
00:05:49.504 ]'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:49.505 /dev/nbd1'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:49.505 /dev/nbd1'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:49.505 256+0 records in
00:05:49.505 256+0 records out
00:05:49.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124222 s, 84.4 MB/s
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:49.505 256+0 records in
00:05:49.505 256+0 records out
00:05:49.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166398 s, 63.0 MB/s
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:49.505 256+0 records in
00:05:49.505 256+0 records out
00:05:49.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183094 s, 57.3 MB/s
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:49.505 20:58:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.764 20:58:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:49.764 20:58:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:50.024 20:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:50.285 20:58:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:50.285 20:58:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:50.545 20:58:51 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:50.545 [2024-12-05 20:58:51.849570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:50.545 [2024-12-05 20:58:51.885905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:50.545 [2024-12-05 20:58:51.885931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.545 [2024-12-05 20:58:51.918371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:50.545 [2024-12-05 20:58:51.918408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:53.843 20:58:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:53.843 20:58:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:53.843 spdk_app_start Round 2
00:05:53.843 20:58:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1851570 /var/tmp/spdk-nbd.sock
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1851570 ']'
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:53.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.843 20:58:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:53.843 20:58:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:53.843 Malloc0
00:05:53.843 20:58:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:53.843 Malloc1
00:05:53.843 20:58:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:53.843 20:58:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:54.103 /dev/nbd0
00:05:54.103 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:54.103 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:54.103 1+0 records in
00:05:54.103 1+0 records out
00:05:54.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289673 s, 14.1 MB/s
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:54.103 20:58:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:54.103 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:54.103 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:54.103 20:58:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:54.364 /dev/nbd1
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:54.364 1+0 records in
00:05:54.364 1+0 records out
00:05:54.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246702 s, 16.6 MB/s
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:54.364 20:58:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:54.364 20:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:54.624 {
00:05:54.624 "nbd_device": "/dev/nbd0",
00:05:54.624 "bdev_name": "Malloc0"
00:05:54.624 },
00:05:54.624 {
00:05:54.624 "nbd_device": "/dev/nbd1",
00:05:54.624 "bdev_name": "Malloc1"
00:05:54.624 }
00:05:54.624 ]'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:54.624 {
00:05:54.624 "nbd_device": "/dev/nbd0",
00:05:54.624 "bdev_name": "Malloc0"
00:05:54.624 },
00:05:54.624 {
00:05:54.624 "nbd_device": "/dev/nbd1",
00:05:54.624 "bdev_name": "Malloc1"
00:05:54.624 }
00:05:54.624 ]'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:54.624 /dev/nbd1'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:54.624 /dev/nbd1'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:54.624 256+0 records in
00:05:54.624 256+0 records out
00:05:54.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580566 s, 181 MB/s
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:54.624 256+0 records in
00:05:54.624 256+0 records out
00:05:54.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165009 s, 63.5 MB/s
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:54.624 256+0 records in
00:05:54.624 256+0 records out
00:05:54.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174435 s, 60.1 MB/s
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:54.624 20:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:54.884 20:58:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:55.144 20:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:55.405 20:58:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:55.405 20:58:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:55.666 20:58:56 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:55.666 [2024-12-05 20:58:56.882065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:55.666 [2024-12-05 20:58:56.918630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:55.666 [2024-12-05 20:58:56.918632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.666 [2024-12-05 20:58:56.950423] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:55.666 [2024-12-05 20:58:56.950458] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:58.352 20:58:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1851570 /var/tmp/spdk-nbd.sock
00:05:58.352 20:58:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1851570 ']'
00:05:58.352 20:58:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:58.352 20:58:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:58.352 20:58:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:58.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:58.352 20:58:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.352 20:58:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:58.613 20:58:59 event.app_repeat -- event/event.sh@39 -- # killprocess 1851570 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1851570 ']' 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1851570 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.613 20:58:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851570 00:05:58.613 20:59:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.613 20:59:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.613 20:59:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851570' 00:05:58.613 killing process with pid 1851570 00:05:58.613 20:59:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1851570 00:05:58.613 20:59:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1851570 00:05:58.873 spdk_app_start is called in Round 0. 00:05:58.873 Shutdown signal received, stop current app iteration 00:05:58.873 Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 reinitialization... 00:05:58.873 spdk_app_start is called in Round 1. 00:05:58.873 Shutdown signal received, stop current app iteration 00:05:58.873 Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 reinitialization... 00:05:58.873 spdk_app_start is called in Round 2. 
00:05:58.873 Shutdown signal received, stop current app iteration 00:05:58.873 Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 reinitialization... 00:05:58.873 spdk_app_start is called in Round 3. 00:05:58.873 Shutdown signal received, stop current app iteration 00:05:58.873 20:59:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.873 20:59:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.873 00:05:58.873 real 0m15.597s 00:05:58.873 user 0m33.990s 00:05:58.873 sys 0m2.233s 00:05:58.873 20:59:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.873 20:59:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.873 ************************************ 00:05:58.873 END TEST app_repeat 00:05:58.873 ************************************ 00:05:58.873 20:59:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.873 20:59:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.873 20:59:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.873 20:59:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.873 20:59:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.873 ************************************ 00:05:58.873 START TEST cpu_locks 00:05:58.873 ************************************ 00:05:58.873 20:59:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:58.873 * Looking for test storage... 
00:05:58.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:58.873 20:59:00 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.873 20:59:00 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.873 20:59:00 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.136 20:59:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.136 --rc genhtml_branch_coverage=1 00:05:59.136 --rc genhtml_function_coverage=1 00:05:59.136 --rc genhtml_legend=1 00:05:59.136 --rc geninfo_all_blocks=1 00:05:59.136 --rc geninfo_unexecuted_blocks=1 00:05:59.136 00:05:59.136 ' 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.136 --rc genhtml_branch_coverage=1 00:05:59.136 --rc genhtml_function_coverage=1 00:05:59.136 --rc genhtml_legend=1 00:05:59.136 --rc geninfo_all_blocks=1 00:05:59.136 --rc geninfo_unexecuted_blocks=1 
00:05:59.136 00:05:59.136 ' 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.136 --rc genhtml_branch_coverage=1 00:05:59.136 --rc genhtml_function_coverage=1 00:05:59.136 --rc genhtml_legend=1 00:05:59.136 --rc geninfo_all_blocks=1 00:05:59.136 --rc geninfo_unexecuted_blocks=1 00:05:59.136 00:05:59.136 ' 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.136 --rc genhtml_branch_coverage=1 00:05:59.136 --rc genhtml_function_coverage=1 00:05:59.136 --rc genhtml_legend=1 00:05:59.136 --rc geninfo_all_blocks=1 00:05:59.136 --rc geninfo_unexecuted_blocks=1 00:05:59.136 00:05:59.136 ' 00:05:59.136 20:59:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:59.136 20:59:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:59.136 20:59:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:59.136 20:59:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.136 20:59:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.136 ************************************ 00:05:59.136 START TEST default_locks 00:05:59.136 ************************************ 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1854845 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1854845 00:05:59.136 20:59:00 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1854845 ']' 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.136 20:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.136 [2024-12-05 20:59:00.479567] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:05:59.136 [2024-12-05 20:59:00.479628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854845 ] 00:05:59.136 [2024-12-05 20:59:00.563667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.398 [2024-12-05 20:59:00.606150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.967 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.967 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:59.967 20:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1854845 00:05:59.967 20:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1854845 00:05:59.967 20:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.538 lslocks: write error 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1854845 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1854845 ']' 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1854845 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854845 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.538 20:59:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1854845' 00:06:00.538 killing process with pid 1854845 00:06:00.539 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1854845 00:06:00.539 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1854845 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1854845 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1854845 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.801 20:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1854845 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1854845 ']' 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1854845) - No such process 00:06:00.801 ERROR: process (pid: 1854845) is no longer running 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.801 00:06:00.801 real 0m1.588s 00:06:00.801 user 0m1.690s 00:06:00.801 sys 0m0.567s 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.801 20:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.801 ************************************ 00:06:00.801 END TEST default_locks 00:06:00.801 ************************************ 00:06:00.801 20:59:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:00.801 20:59:02 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.801 20:59:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.801 20:59:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.801 ************************************ 00:06:00.801 START TEST default_locks_via_rpc 00:06:00.801 ************************************ 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1855213 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1855213 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1855213 ']' 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.801 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.801 [2024-12-05 20:59:02.135922] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:00.801 [2024-12-05 20:59:02.135980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855213 ] 00:06:00.801 [2024-12-05 20:59:02.216690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.062 [2024-12-05 20:59:02.257046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.634 20:59:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1855213 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1855213 00:06:01.634 20:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1855213 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1855213 ']' 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1855213 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855213 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855213' 00:06:02.206 killing process with pid 1855213 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1855213 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1855213 00:06:02.206 00:06:02.206 real 0m1.538s 00:06:02.206 user 0m1.648s 00:06:02.206 sys 0m0.537s 00:06:02.206 20:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.206 20:59:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.206 ************************************ 00:06:02.206 END TEST default_locks_via_rpc 00:06:02.206 ************************************ 00:06:02.467 20:59:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.467 20:59:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.467 20:59:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.467 20:59:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.467 ************************************ 00:06:02.467 START TEST non_locking_app_on_locked_coremask 00:06:02.467 ************************************ 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1855572 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1855572 /var/tmp/spdk.sock 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1855572 ']' 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:02.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.467 20:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.467 [2024-12-05 20:59:03.745959] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:02.467 [2024-12-05 20:59:03.746007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855572 ] 00:06:02.467 [2024-12-05 20:59:03.822942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.467 [2024-12-05 20:59:03.858987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1855905 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1855905 /var/tmp/spdk2.sock 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1855905 ']' 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.410 20:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.410 [2024-12-05 20:59:04.595173] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:03.410 [2024-12-05 20:59:04.595231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855905 ] 00:06:03.410 [2024-12-05 20:59:04.714715] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.410 [2024-12-05 20:59:04.714744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.410 [2024-12-05 20:59:04.787268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.980 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.980 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.980 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1855572 00:06:03.980 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1855572 00:06:03.980 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.550 lslocks: write error 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1855572 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1855572 ']' 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1855572 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855572 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1855572' 00:06:04.550 killing process with pid 1855572 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1855572 00:06:04.550 20:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1855572 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1855905 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1855905 ']' 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1855905 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855905 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855905' 00:06:05.121 killing process with pid 1855905 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1855905 00:06:05.121 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1855905 00:06:05.381 00:06:05.381 real 0m2.939s 00:06:05.381 user 0m3.265s 00:06:05.381 sys 0m0.875s 00:06:05.381 20:59:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.381 20:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.381 ************************************ 00:06:05.381 END TEST non_locking_app_on_locked_coremask 00:06:05.381 ************************************ 00:06:05.381 20:59:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.381 20:59:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.381 20:59:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.381 20:59:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.381 ************************************ 00:06:05.381 START TEST locking_app_on_unlocked_coremask 00:06:05.381 ************************************ 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1856278 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1856278 /var/tmp/spdk.sock 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1856278 ']' 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.381 20:59:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.381 20:59:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.381 [2024-12-05 20:59:06.761610] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:05.381 [2024-12-05 20:59:06.761659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856278 ] 00:06:05.642 [2024-12-05 20:59:06.840699] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.642 [2024-12-05 20:59:06.840726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.642 [2024-12-05 20:59:06.875226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1856466 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1856466 /var/tmp/spdk2.sock 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1856466 ']' 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.213 20:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.213 [2024-12-05 20:59:07.617200] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:06.213 [2024-12-05 20:59:07.617256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856466 ] 00:06:06.474 [2024-12-05 20:59:07.739076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.474 [2024-12-05 20:59:07.816246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.044 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.044 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.044 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1856466 00:06:07.044 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1856466 00:06:07.044 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.613 lslocks: write error 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1856278 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1856278 ']' 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1856278 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856278 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856278' 00:06:07.613 killing process with pid 1856278 00:06:07.613 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1856278 00:06:07.614 20:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1856278 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1856466 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1856466 ']' 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1856466 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856466 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856466' 00:06:08.184 killing process with pid 1856466 00:06:08.184 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1856466 00:06:08.184 20:59:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1856466 00:06:08.445 00:06:08.445 real 0m2.950s 00:06:08.445 user 0m3.270s 00:06:08.445 sys 0m0.906s 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.445 ************************************ 00:06:08.445 END TEST locking_app_on_unlocked_coremask 00:06:08.445 ************************************ 00:06:08.445 20:59:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:08.445 20:59:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.445 20:59:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.445 20:59:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.445 ************************************ 00:06:08.445 START TEST locking_app_on_locked_coremask 00:06:08.445 ************************************ 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1856989 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1856989 /var/tmp/spdk.sock 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1856989 ']' 00:06:08.445 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:08.446 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.446 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.446 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.446 20:59:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.446 [2024-12-05 20:59:09.789277] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:08.446 [2024-12-05 20:59:09.789329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856989 ] 00:06:08.446 [2024-12-05 20:59:09.868951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.706 [2024-12-05 20:59:09.907674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1857011 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1857011 /var/tmp/spdk2.sock 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1857011 /var/tmp/spdk2.sock 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1857011 /var/tmp/spdk2.sock 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1857011 ']' 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.276 20:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.276 [2024-12-05 20:59:10.630169] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:09.276 [2024-12-05 20:59:10.630223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857011 ] 00:06:09.536 [2024-12-05 20:59:10.753811] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1856989 has claimed it. 00:06:09.536 [2024-12-05 20:59:10.753859] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1857011) - No such process 00:06:10.105 ERROR: process (pid: 1857011) is no longer running 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1856989 00:06:10.105 20:59:11 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1856989 00:06:10.105 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.365 lslocks: write error 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1856989 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1856989 ']' 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1856989 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856989 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856989' 00:06:10.365 killing process with pid 1856989 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1856989 00:06:10.365 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1856989 00:06:10.625 00:06:10.625 real 0m2.223s 00:06:10.625 user 0m2.507s 00:06:10.625 sys 0m0.603s 00:06:10.625 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.625 20:59:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.625 ************************************ 00:06:10.625 END TEST locking_app_on_locked_coremask 00:06:10.625 ************************************ 00:06:10.625 20:59:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.625 20:59:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.625 20:59:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.625 20:59:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.625 ************************************ 00:06:10.625 START TEST locking_overlapped_coremask 00:06:10.625 ************************************ 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1857368 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1857368 /var/tmp/spdk.sock 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1857368 ']' 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.625 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.886 [2024-12-05 20:59:12.083033] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:10.886 [2024-12-05 20:59:12.083091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857368 ] 00:06:10.886 [2024-12-05 20:59:12.163368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.886 [2024-12-05 20:59:12.205239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.886 [2024-12-05 20:59:12.205356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.886 [2024-12-05 20:59:12.205359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1857677 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1857677 /var/tmp/spdk2.sock 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1857677 /var/tmp/spdk2.sock 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1857677 /var/tmp/spdk2.sock 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1857677 ']' 00:06:11.457 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.458 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.458 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.458 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.458 20:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.718 [2024-12-05 20:59:12.926951] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:11.718 [2024-12-05 20:59:12.927006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857677 ] 00:06:11.718 [2024-12-05 20:59:13.024783] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1857368 has claimed it. 00:06:11.718 [2024-12-05 20:59:13.024815] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1857677) - No such process 00:06:12.288 ERROR: process (pid: 1857677) is no longer running 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1857368 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1857368 ']' 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1857368 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1857368 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1857368' 00:06:12.288 killing process with pid 1857368 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1857368 00:06:12.288 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1857368 00:06:12.548 00:06:12.548 real 0m1.780s 00:06:12.548 user 0m5.112s 00:06:12.548 sys 0m0.392s 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.548 
************************************ 00:06:12.548 END TEST locking_overlapped_coremask 00:06:12.548 ************************************ 00:06:12.548 20:59:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.548 20:59:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.548 20:59:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.548 20:59:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.548 ************************************ 00:06:12.548 START TEST locking_overlapped_coremask_via_rpc 00:06:12.548 ************************************ 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1857744 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1857744 /var/tmp/spdk.sock 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1857744 ']' 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:12.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.548 20:59:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.549 [2024-12-05 20:59:13.953386] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:12.549 [2024-12-05 20:59:13.953443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857744 ] 00:06:12.808 [2024-12-05 20:59:14.039069] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.808 [2024-12-05 20:59:14.039100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.808 [2024-12-05 20:59:14.082403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.808 [2024-12-05 20:59:14.082519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.808 [2024-12-05 20:59:14.082522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1858076 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1858076 /var/tmp/spdk2.sock 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1858076 ']' 00:06:13.378 20:59:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.378 20:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.378 [2024-12-05 20:59:14.781787] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:13.379 [2024-12-05 20:59:14.781843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858076 ] 00:06:13.638 [2024-12-05 20:59:14.881534] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.638 [2024-12-05 20:59:14.881556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.638 [2024-12-05 20:59:14.940728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.638 [2024-12-05 20:59:14.943983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.638 [2024-12-05 20:59:14.943985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.207 20:59:15 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.207 [2024-12-05 20:59:15.580923] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1857744 has claimed it. 00:06:14.207 request: 00:06:14.207 { 00:06:14.207 "method": "framework_enable_cpumask_locks", 00:06:14.207 "req_id": 1 00:06:14.207 } 00:06:14.207 Got JSON-RPC error response 00:06:14.207 response: 00:06:14.207 { 00:06:14.207 "code": -32603, 00:06:14.207 "message": "Failed to claim CPU core: 2" 00:06:14.207 } 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:14.207 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1857744 /var/tmp/spdk.sock 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1857744 ']' 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.208 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1858076 /var/tmp/spdk2.sock 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1858076 ']' 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.468 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.728 00:06:14.728 real 0m2.073s 00:06:14.728 user 0m0.852s 00:06:14.728 sys 0m0.151s 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.728 20:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.728 ************************************ 00:06:14.728 END TEST locking_overlapped_coremask_via_rpc 00:06:14.728 ************************************ 00:06:14.728 20:59:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.728 20:59:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1857744 ]] 00:06:14.728 20:59:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1857744 00:06:14.728 20:59:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1857744 ']' 00:06:14.728 20:59:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1857744 00:06:14.728 20:59:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.728 20:59:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.728 20:59:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1857744 00:06:14.728 20:59:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.728 20:59:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.728 20:59:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1857744' 00:06:14.728 killing process with pid 1857744 00:06:14.728 20:59:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1857744 00:06:14.728 20:59:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1857744 00:06:14.987 20:59:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1858076 ]] 00:06:14.987 20:59:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1858076 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1858076 ']' 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1858076 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1858076 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1858076' 00:06:14.987 killing process with pid 1858076 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1858076 00:06:14.987 20:59:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1858076 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1857744 ]] 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1857744 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1857744 ']' 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1857744 00:06:15.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1857744) - No such process 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1857744 is not found' 00:06:15.248 Process with pid 1857744 is not found 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1858076 ]] 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1858076 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1858076 ']' 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1858076 00:06:15.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1858076) - No such process 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1858076 is not found' 00:06:15.248 Process with pid 1858076 is not found 00:06:15.248 20:59:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.248 00:06:15.248 real 0m16.358s 00:06:15.248 user 0m28.404s 00:06:15.248 sys 0m4.993s 00:06:15.248 20:59:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.248 
20:59:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 ************************************ 00:06:15.248 END TEST cpu_locks 00:06:15.248 ************************************ 00:06:15.248 00:06:15.248 real 0m41.950s 00:06:15.248 user 1m21.951s 00:06:15.248 sys 0m8.268s 00:06:15.248 20:59:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.248 20:59:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 ************************************ 00:06:15.248 END TEST event 00:06:15.248 ************************************ 00:06:15.248 20:59:16 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.248 20:59:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.248 20:59:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.248 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:15.248 ************************************ 00:06:15.248 START TEST thread 00:06:15.248 ************************************ 00:06:15.248 20:59:16 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:15.509 * Looking for test storage... 
00:06:15.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.509 20:59:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.509 20:59:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.509 20:59:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.509 20:59:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.509 20:59:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.509 20:59:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.509 20:59:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.509 20:59:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.509 20:59:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.509 20:59:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.509 20:59:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.509 20:59:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:15.509 20:59:16 thread -- scripts/common.sh@345 -- # : 1 00:06:15.509 20:59:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.509 20:59:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.509 20:59:16 thread -- scripts/common.sh@365 -- # decimal 1 00:06:15.509 20:59:16 thread -- scripts/common.sh@353 -- # local d=1 00:06:15.509 20:59:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.509 20:59:16 thread -- scripts/common.sh@355 -- # echo 1 00:06:15.509 20:59:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.509 20:59:16 thread -- scripts/common.sh@366 -- # decimal 2 00:06:15.509 20:59:16 thread -- scripts/common.sh@353 -- # local d=2 00:06:15.509 20:59:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.509 20:59:16 thread -- scripts/common.sh@355 -- # echo 2 00:06:15.509 20:59:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.509 20:59:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.509 20:59:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.509 20:59:16 thread -- scripts/common.sh@368 -- # return 0 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.509 --rc genhtml_branch_coverage=1 00:06:15.509 --rc genhtml_function_coverage=1 00:06:15.509 --rc genhtml_legend=1 00:06:15.509 --rc geninfo_all_blocks=1 00:06:15.509 --rc geninfo_unexecuted_blocks=1 00:06:15.509 00:06:15.509 ' 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.509 --rc genhtml_branch_coverage=1 00:06:15.509 --rc genhtml_function_coverage=1 00:06:15.509 --rc genhtml_legend=1 00:06:15.509 --rc geninfo_all_blocks=1 00:06:15.509 --rc geninfo_unexecuted_blocks=1 00:06:15.509 00:06:15.509 ' 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.509 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.509 --rc genhtml_branch_coverage=1 00:06:15.509 --rc genhtml_function_coverage=1 00:06:15.509 --rc genhtml_legend=1 00:06:15.509 --rc geninfo_all_blocks=1 00:06:15.509 --rc geninfo_unexecuted_blocks=1 00:06:15.509 00:06:15.509 ' 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.509 --rc genhtml_branch_coverage=1 00:06:15.509 --rc genhtml_function_coverage=1 00:06:15.509 --rc genhtml_legend=1 00:06:15.509 --rc geninfo_all_blocks=1 00:06:15.509 --rc geninfo_unexecuted_blocks=1 00:06:15.509 00:06:15.509 ' 00:06:15.509 20:59:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.509 20:59:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.509 ************************************ 00:06:15.509 START TEST thread_poller_perf 00:06:15.509 ************************************ 00:06:15.509 20:59:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.509 [2024-12-05 20:59:16.912169] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:15.509 [2024-12-05 20:59:16.912257] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858522 ] 00:06:15.769 [2024-12-05 20:59:16.997022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.769 [2024-12-05 20:59:17.038273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.769 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:16.710 [2024-12-05T19:59:18.147Z] ====================================== 00:06:16.710 [2024-12-05T19:59:18.147Z] busy:2413656528 (cyc) 00:06:16.710 [2024-12-05T19:59:18.147Z] total_run_count: 287000 00:06:16.710 [2024-12-05T19:59:18.147Z] tsc_hz: 2400000000 (cyc) 00:06:16.710 [2024-12-05T19:59:18.147Z] ====================================== 00:06:16.710 [2024-12-05T19:59:18.147Z] poller_cost: 8409 (cyc), 3503 (nsec) 00:06:16.710 00:06:16.710 real 0m1.190s 00:06:16.710 user 0m1.112s 00:06:16.710 sys 0m0.073s 00:06:16.710 20:59:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.710 20:59:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.710 ************************************ 00:06:16.710 END TEST thread_poller_perf 00:06:16.710 ************************************ 00:06:16.710 20:59:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.710 20:59:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.710 20:59:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.710 20:59:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.970 ************************************ 00:06:16.970 START TEST thread_poller_perf 00:06:16.970 
************************************ 00:06:16.970 20:59:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.970 [2024-12-05 20:59:18.179519] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:16.970 [2024-12-05 20:59:18.179609] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858873 ] 00:06:16.970 [2024-12-05 20:59:18.262732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.970 [2024-12-05 20:59:18.299779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.970 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:17.909 [2024-12-05T19:59:19.346Z] ====================================== 00:06:17.909 [2024-12-05T19:59:19.346Z] busy:2401803072 (cyc) 00:06:17.909 [2024-12-05T19:59:19.346Z] total_run_count: 3501000 00:06:17.909 [2024-12-05T19:59:19.346Z] tsc_hz: 2400000000 (cyc) 00:06:17.909 [2024-12-05T19:59:19.346Z] ====================================== 00:06:17.909 [2024-12-05T19:59:19.346Z] poller_cost: 686 (cyc), 285 (nsec) 00:06:17.909 00:06:17.909 real 0m1.175s 00:06:17.909 user 0m1.106s 00:06:17.909 sys 0m0.065s 00:06:17.909 20:59:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.909 20:59:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.909 ************************************ 00:06:17.909 END TEST thread_poller_perf 00:06:17.909 ************************************ 00:06:18.169 20:59:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.169 00:06:18.169 real 0m2.712s 00:06:18.169 user 0m2.391s 00:06:18.169 sys 0m0.332s 00:06:18.169 20:59:19 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.169 20:59:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.169 ************************************ 00:06:18.169 END TEST thread 00:06:18.169 ************************************ 00:06:18.169 20:59:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:18.169 20:59:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:18.169 20:59:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.169 20:59:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.169 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.169 ************************************ 00:06:18.169 START TEST app_cmdline 00:06:18.169 ************************************ 00:06:18.169 20:59:19 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:18.169 * Looking for test storage... 00:06:18.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.169 20:59:19 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.169 20:59:19 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.169 20:59:19 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.430 20:59:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.430 --rc genhtml_branch_coverage=1 
00:06:18.430 --rc genhtml_function_coverage=1 00:06:18.430 --rc genhtml_legend=1 00:06:18.430 --rc geninfo_all_blocks=1 00:06:18.430 --rc geninfo_unexecuted_blocks=1 00:06:18.430 00:06:18.430 ' 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.430 --rc genhtml_branch_coverage=1 00:06:18.430 --rc genhtml_function_coverage=1 00:06:18.430 --rc genhtml_legend=1 00:06:18.430 --rc geninfo_all_blocks=1 00:06:18.430 --rc geninfo_unexecuted_blocks=1 00:06:18.430 00:06:18.430 ' 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.430 --rc genhtml_branch_coverage=1 00:06:18.430 --rc genhtml_function_coverage=1 00:06:18.430 --rc genhtml_legend=1 00:06:18.430 --rc geninfo_all_blocks=1 00:06:18.430 --rc geninfo_unexecuted_blocks=1 00:06:18.430 00:06:18.430 ' 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.430 --rc genhtml_branch_coverage=1 00:06:18.430 --rc genhtml_function_coverage=1 00:06:18.430 --rc genhtml_legend=1 00:06:18.430 --rc geninfo_all_blocks=1 00:06:18.430 --rc geninfo_unexecuted_blocks=1 00:06:18.430 00:06:18.430 ' 00:06:18.430 20:59:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.430 20:59:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1859255 00:06:18.430 20:59:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1859255 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1859255 ']' 00:06:18.430 20:59:19 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.430 20:59:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.430 [2024-12-05 20:59:19.708549] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:18.430 [2024-12-05 20:59:19.708614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859255 ] 00:06:18.430 [2024-12-05 20:59:19.788637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.430 [2024-12-05 20:59:19.824958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.689 20:59:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.690 20:59:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:18.690 20:59:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:18.949 { 00:06:18.949 "version": "SPDK v25.01-pre git sha1 a333974e5", 00:06:18.949 "fields": { 00:06:18.949 "major": 25, 00:06:18.949 "minor": 1, 00:06:18.949 "patch": 0, 00:06:18.949 "suffix": "-pre", 00:06:18.949 "commit": "a333974e5" 00:06:18.949 } 00:06:18.949 } 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:18.949 20:59:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:18.949 20:59:20 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.210 request: 00:06:19.210 { 00:06:19.210 "method": "env_dpdk_get_mem_stats", 00:06:19.210 "req_id": 1 00:06:19.210 } 00:06:19.210 Got JSON-RPC error response 00:06:19.210 response: 00:06:19.210 { 00:06:19.210 "code": -32601, 00:06:19.210 "message": "Method not found" 00:06:19.210 } 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.210 20:59:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1859255 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1859255 ']' 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1859255 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1859255 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1859255' 00:06:19.210 killing process with pid 1859255 00:06:19.210 
20:59:20 app_cmdline -- common/autotest_common.sh@973 -- # kill 1859255 00:06:19.210 20:59:20 app_cmdline -- common/autotest_common.sh@978 -- # wait 1859255 00:06:19.471 00:06:19.471 real 0m1.254s 00:06:19.471 user 0m1.539s 00:06:19.471 sys 0m0.435s 00:06:19.471 20:59:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.471 20:59:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.471 ************************************ 00:06:19.471 END TEST app_cmdline 00:06:19.471 ************************************ 00:06:19.471 20:59:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.471 20:59:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.471 20:59:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.471 20:59:20 -- common/autotest_common.sh@10 -- # set +x 00:06:19.471 ************************************ 00:06:19.471 START TEST version 00:06:19.471 ************************************ 00:06:19.471 20:59:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:19.471 * Looking for test storage... 
00:06:19.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:19.471 20:59:20 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.471 20:59:20 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.471 20:59:20 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.732 20:59:20 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.732 20:59:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.732 20:59:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.732 20:59:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.732 20:59:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.732 20:59:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.732 20:59:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.732 20:59:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.732 20:59:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.732 20:59:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.732 20:59:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.732 20:59:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.732 20:59:20 version -- scripts/common.sh@344 -- # case "$op" in 00:06:19.732 20:59:20 version -- scripts/common.sh@345 -- # : 1 00:06:19.732 20:59:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.732 20:59:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.732 20:59:20 version -- scripts/common.sh@365 -- # decimal 1 00:06:19.732 20:59:20 version -- scripts/common.sh@353 -- # local d=1 00:06:19.732 20:59:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.732 20:59:20 version -- scripts/common.sh@355 -- # echo 1 00:06:19.732 20:59:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.732 20:59:20 version -- scripts/common.sh@366 -- # decimal 2 00:06:19.732 20:59:20 version -- scripts/common.sh@353 -- # local d=2 00:06:19.732 20:59:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.732 20:59:20 version -- scripts/common.sh@355 -- # echo 2 00:06:19.732 20:59:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.732 20:59:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.732 20:59:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.732 20:59:20 version -- scripts/common.sh@368 -- # return 0 00:06:19.732 20:59:20 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.732 20:59:20 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.732 --rc genhtml_branch_coverage=1 00:06:19.732 --rc genhtml_function_coverage=1 00:06:19.732 --rc genhtml_legend=1 00:06:19.732 --rc geninfo_all_blocks=1 00:06:19.732 --rc geninfo_unexecuted_blocks=1 00:06:19.732 00:06:19.732 ' 00:06:19.732 20:59:20 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.732 --rc genhtml_branch_coverage=1 00:06:19.732 --rc genhtml_function_coverage=1 00:06:19.732 --rc genhtml_legend=1 00:06:19.732 --rc geninfo_all_blocks=1 00:06:19.732 --rc geninfo_unexecuted_blocks=1 00:06:19.732 00:06:19.732 ' 00:06:19.732 20:59:20 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.732 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.732 --rc genhtml_branch_coverage=1 00:06:19.732 --rc genhtml_function_coverage=1 00:06:19.732 --rc genhtml_legend=1 00:06:19.732 --rc geninfo_all_blocks=1 00:06:19.732 --rc geninfo_unexecuted_blocks=1 00:06:19.732 00:06:19.732 ' 00:06:19.732 20:59:20 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.732 --rc genhtml_branch_coverage=1 00:06:19.732 --rc genhtml_function_coverage=1 00:06:19.732 --rc genhtml_legend=1 00:06:19.732 --rc geninfo_all_blocks=1 00:06:19.732 --rc geninfo_unexecuted_blocks=1 00:06:19.732 00:06:19.732 ' 00:06:19.732 20:59:20 version -- app/version.sh@17 -- # get_header_version major 00:06:19.732 20:59:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.732 20:59:20 version -- app/version.sh@14 -- # cut -f2 00:06:19.732 20:59:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.732 20:59:20 version -- app/version.sh@17 -- # major=25 00:06:19.732 20:59:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.732 20:59:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.732 20:59:20 version -- app/version.sh@14 -- # cut -f2 00:06:19.732 20:59:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.732 20:59:20 version -- app/version.sh@18 -- # minor=1 00:06:19.732 20:59:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.732 20:59:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.732 20:59:20 version -- app/version.sh@14 -- # cut -f2 00:06:19.732 20:59:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.732 
20:59:21 version -- app/version.sh@19 -- # patch=0 00:06:19.732 20:59:21 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.732 20:59:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.732 20:59:21 version -- app/version.sh@14 -- # cut -f2 00:06:19.732 20:59:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.733 20:59:21 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.733 20:59:21 version -- app/version.sh@22 -- # version=25.1 00:06:19.733 20:59:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.733 20:59:21 version -- app/version.sh@28 -- # version=25.1rc0 00:06:19.733 20:59:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:19.733 20:59:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.733 20:59:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:19.733 20:59:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:19.733 00:06:19.733 real 0m0.279s 00:06:19.733 user 0m0.159s 00:06:19.733 sys 0m0.162s 00:06:19.733 20:59:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.733 20:59:21 version -- common/autotest_common.sh@10 -- # set +x 00:06:19.733 ************************************ 00:06:19.733 END TEST version 00:06:19.733 ************************************ 00:06:19.733 20:59:21 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:19.733 20:59:21 -- spdk/autotest.sh@194 -- # uname -s 00:06:19.733 20:59:21 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:19.733 20:59:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.733 20:59:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.733 20:59:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:19.733 20:59:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.733 20:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.733 20:59:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:19.733 20:59:21 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:19.733 20:59:21 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.733 20:59:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.733 20:59:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.733 20:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:19.994 ************************************ 00:06:19.994 START TEST nvmf_tcp 00:06:19.994 ************************************ 00:06:19.994 20:59:21 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.994 * Looking for test storage... 
00:06:19.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:19.994 20:59:21 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.994 20:59:21 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.994 20:59:21 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.994 20:59:21 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.994 20:59:21 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.994 20:59:21 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.994 20:59:21 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.994 20:59:21 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.994 20:59:21 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.995 20:59:21 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.995 --rc genhtml_branch_coverage=1 00:06:19.995 --rc genhtml_function_coverage=1 00:06:19.995 --rc genhtml_legend=1 00:06:19.995 --rc geninfo_all_blocks=1 00:06:19.995 --rc geninfo_unexecuted_blocks=1 00:06:19.995 00:06:19.995 ' 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.995 --rc genhtml_branch_coverage=1 00:06:19.995 --rc genhtml_function_coverage=1 00:06:19.995 --rc genhtml_legend=1 00:06:19.995 --rc geninfo_all_blocks=1 00:06:19.995 --rc geninfo_unexecuted_blocks=1 00:06:19.995 00:06:19.995 ' 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.995 --rc genhtml_branch_coverage=1 00:06:19.995 --rc genhtml_function_coverage=1 00:06:19.995 --rc genhtml_legend=1 00:06:19.995 --rc geninfo_all_blocks=1 00:06:19.995 --rc geninfo_unexecuted_blocks=1 00:06:19.995 00:06:19.995 ' 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.995 --rc genhtml_branch_coverage=1 00:06:19.995 --rc genhtml_function_coverage=1 00:06:19.995 --rc genhtml_legend=1 00:06:19.995 --rc geninfo_all_blocks=1 00:06:19.995 --rc geninfo_unexecuted_blocks=1 00:06:19.995 00:06:19.995 ' 00:06:19.995 20:59:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:19.995 20:59:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:19.995 20:59:21 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.995 20:59:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.256 ************************************ 00:06:20.256 START TEST nvmf_target_core 00:06:20.256 ************************************ 00:06:20.256 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:20.256 * Looking for test storage... 
00:06:20.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:20.256 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.257 --rc genhtml_branch_coverage=1 00:06:20.257 --rc genhtml_function_coverage=1 00:06:20.257 --rc genhtml_legend=1 00:06:20.257 --rc geninfo_all_blocks=1 00:06:20.257 --rc geninfo_unexecuted_blocks=1 00:06:20.257 00:06:20.257 ' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.257 --rc genhtml_branch_coverage=1 
00:06:20.257 --rc genhtml_function_coverage=1 00:06:20.257 --rc genhtml_legend=1 00:06:20.257 --rc geninfo_all_blocks=1 00:06:20.257 --rc geninfo_unexecuted_blocks=1 00:06:20.257 00:06:20.257 ' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.257 --rc genhtml_branch_coverage=1 00:06:20.257 --rc genhtml_function_coverage=1 00:06:20.257 --rc genhtml_legend=1 00:06:20.257 --rc geninfo_all_blocks=1 00:06:20.257 --rc geninfo_unexecuted_blocks=1 00:06:20.257 00:06:20.257 ' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.257 --rc genhtml_branch_coverage=1 00:06:20.257 --rc genhtml_function_coverage=1 00:06:20.257 --rc genhtml_legend=1 00:06:20.257 --rc geninfo_all_blocks=1 00:06:20.257 --rc geninfo_unexecuted_blocks=1 00:06:20.257 00:06:20.257 ' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.257 ************************************ 00:06:20.257 START TEST nvmf_abort 00:06:20.257 ************************************ 00:06:20.257 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:20.518 * Looking for test storage... 
00:06:20.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.518 
20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.518 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.519 --rc genhtml_branch_coverage=1 00:06:20.519 --rc genhtml_function_coverage=1 00:06:20.519 --rc genhtml_legend=1 00:06:20.519 --rc geninfo_all_blocks=1 00:06:20.519 --rc 
geninfo_unexecuted_blocks=1 00:06:20.519 00:06:20.519 ' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.519 --rc genhtml_branch_coverage=1 00:06:20.519 --rc genhtml_function_coverage=1 00:06:20.519 --rc genhtml_legend=1 00:06:20.519 --rc geninfo_all_blocks=1 00:06:20.519 --rc geninfo_unexecuted_blocks=1 00:06:20.519 00:06:20.519 ' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.519 --rc genhtml_branch_coverage=1 00:06:20.519 --rc genhtml_function_coverage=1 00:06:20.519 --rc genhtml_legend=1 00:06:20.519 --rc geninfo_all_blocks=1 00:06:20.519 --rc geninfo_unexecuted_blocks=1 00:06:20.519 00:06:20.519 ' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.519 --rc genhtml_branch_coverage=1 00:06:20.519 --rc genhtml_function_coverage=1 00:06:20.519 --rc genhtml_legend=1 00:06:20.519 --rc geninfo_all_blocks=1 00:06:20.519 --rc geninfo_unexecuted_blocks=1 00:06:20.519 00:06:20.519 ' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.519 20:59:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.519 20:59:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:28.653 20:59:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:28.653 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:28.653 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:28.653 20:59:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.653 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:28.654 Found net devices under 0000:31:00.0: cvl_0_0 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:06:28.654 Found net devices under 0000:31:00.1: cvl_0_1 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.654 20:59:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.654 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.654 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.654 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:28.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:06:28.916 00:06:28.916 --- 10.0.0.2 ping statistics --- 00:06:28.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.916 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:06:28.916 00:06:28.916 --- 10.0.0.1 ping statistics --- 00:06:28.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.916 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.916 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1864119 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1864119 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1864119 ']' 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.917 20:59:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.917 [2024-12-05 20:59:30.326607] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:06:28.917 [2024-12-05 20:59:30.326657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.178 [2024-12-05 20:59:30.431874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.178 [2024-12-05 20:59:30.477752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.178 [2024-12-05 20:59:30.477804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.179 [2024-12-05 20:59:30.477813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.179 [2024-12-05 20:59:30.477820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.179 [2024-12-05 20:59:30.477826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:29.179 [2024-12-05 20:59:30.479597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.179 [2024-12-05 20:59:30.479759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.179 [2024-12-05 20:59:30.479758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.750 [2024-12-05 20:59:31.165403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.750 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.012 Malloc0 00:06:30.012 20:59:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.012 Delay0 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.012 [2024-12-05 20:59:31.254712] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.012 20:59:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:30.012 [2024-12-05 20:59:31.343942] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:32.559 Initializing NVMe Controllers 00:06:32.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:32.560 controller IO queue size 128 less than required 00:06:32.560 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:32.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:32.560 Initialization complete. Launching workers. 
00:06:32.560 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 27407 00:06:32.560 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27469, failed to submit 62 00:06:32.560 success 27411, unsuccessful 58, failed 0 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:32.560 rmmod nvme_tcp 00:06:32.560 rmmod nvme_fabrics 00:06:32.560 rmmod nvme_keyring 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:32.560 20:59:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1864119 ']' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1864119 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1864119 ']' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1864119 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1864119 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1864119' 00:06:32.560 killing process with pid 1864119 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1864119 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1864119 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 
-- # iptables-save 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.560 20:59:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:34.542 00:06:34.542 real 0m14.128s 00:06:34.542 user 0m13.885s 00:06:34.542 sys 0m7.197s 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:34.542 ************************************ 00:06:34.542 END TEST nvmf_abort 00:06:34.542 ************************************ 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.542 20:59:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.542 ************************************ 00:06:34.542 START TEST nvmf_ns_hotplug_stress 00:06:34.542 ************************************ 00:06:34.542 20:59:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:34.802 * Looking for test storage... 00:06:34.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.802 20:59:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.802 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.803 
20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.803 20:59:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:34.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.803 --rc genhtml_branch_coverage=1 00:06:34.803 --rc genhtml_function_coverage=1 00:06:34.803 --rc genhtml_legend=1 00:06:34.803 --rc geninfo_all_blocks=1 00:06:34.803 --rc geninfo_unexecuted_blocks=1 00:06:34.803 00:06:34.803 ' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:34.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.803 --rc genhtml_branch_coverage=1 00:06:34.803 --rc genhtml_function_coverage=1 00:06:34.803 --rc genhtml_legend=1 00:06:34.803 --rc geninfo_all_blocks=1 00:06:34.803 --rc geninfo_unexecuted_blocks=1 00:06:34.803 00:06:34.803 ' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:34.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.803 --rc genhtml_branch_coverage=1 00:06:34.803 --rc genhtml_function_coverage=1 00:06:34.803 --rc genhtml_legend=1 00:06:34.803 --rc geninfo_all_blocks=1 00:06:34.803 --rc geninfo_unexecuted_blocks=1 00:06:34.803 00:06:34.803 ' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:34.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.803 --rc genhtml_branch_coverage=1 00:06:34.803 --rc genhtml_function_coverage=1 00:06:34.803 --rc genhtml_legend=1 00:06:34.803 --rc geninfo_all_blocks=1 00:06:34.803 --rc geninfo_unexecuted_blocks=1 00:06:34.803 
00:06:34.803 ' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:34.803 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.804 20:59:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.940 20:59:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:42.940 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:42.940 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.940 20:59:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.940 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:42.941 Found net devices under 0000:31:00.0: cvl_0_0 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.941 20:59:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:42.941 Found net devices under 0000:31:00.1: cvl_0_1 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.941 20:59:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.941 20:59:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:06:42.941 00:06:42.941 --- 10.0.0.2 ping statistics --- 00:06:42.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.941 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:06:42.941 00:06:42.941 --- 10.0.0.1 ping statistics --- 00:06:42.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.941 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1869526 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1869526 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1869526 ']' 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.941 20:59:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:42.941 [2024-12-05 20:59:44.284895] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:06:42.941 [2024-12-05 20:59:44.284984] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.330 [2024-12-05 20:59:44.394485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.330 [2024-12-05 20:59:44.445658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.330 [2024-12-05 20:59:44.445713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.330 [2024-12-05 20:59:44.445721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.330 [2024-12-05 20:59:44.445728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.330 [2024-12-05 20:59:44.445735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
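Once `nvmf_tgt` is up, the test drives all target configuration through `scripts/rpc.py` over the UNIX socket, as the subsequent log records show. The sequence can be sketched as a dry-run script; here `rpc` is a stub that only echoes each call, so this runs standalone — against a live target you would replace it with the real `rpc.py` from the SPDK tree (the flag values below are taken directly from the log; the 8192 in-capsule data size and the Delay0 latencies of 1000000 µs are the test's choices, not defaults):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target setup sequence from the log.
# Stub: echo each RPC instead of invoking spdk/scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

setup_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport
    rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 512 -b Malloc0                 # 32 MiB, 512 B blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000          # add fixed latency
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    rpc bdev_null_create NULL1 1000 512                      # 1000 blocks of 512 B
    rpc nvmf_subsystem_add_ns "$NQN" NULL1
}

setup_target
```

The Delay0 bdev layered on Malloc0 keeps I/O in flight long enough for the hot-remove to race against it, which is the point of the stress test.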
00:06:43.330 [2024-12-05 20:59:44.447587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.330 [2024-12-05 20:59:44.447753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.330 [2024-12-05 20:59:44.447753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:43.902 [2024-12-05 20:59:45.288532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.902 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.162 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.423 [2024-12-05 20:59:45.649958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.423 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.683 20:59:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:44.683 Malloc0 00:06:44.683 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:44.944 Delay0 00:06:44.944 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.204 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:45.204 NULL1 00:06:45.204 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:45.464 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:45.464 20:59:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1869922 00:06:45.464 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:45.464 20:59:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.860 Read completed with error (sct=0, sc=11) 00:06:46.860 20:59:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.860 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:46.860 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:47.123 true 00:06:47.123 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:47.123 20:59:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.065 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.065 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:48.065 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:48.065 true 00:06:48.328 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:48.328 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.328 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.589 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:48.589 20:59:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:48.589 true 00:06:48.850 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:48.850 20:59:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.793 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.054 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:50.054 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:50.315 true 00:06:50.315 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:50.315 20:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.257 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.257 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:51.257 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:51.519 true 00:06:51.519 
20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:51.519 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.519 20:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.780 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:51.780 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:52.041 true 00:06:52.041 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:52.041 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.041 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.302 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:52.302 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:52.563 true 00:06:52.563 20:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:52.563 20:59:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.823 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.823 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:52.823 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:53.084 true 00:06:53.084 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:53.084 20:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.466 20:59:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:54.466 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:54.466 true 00:06:54.466 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:54.466 20:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.406 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.666 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:55.666 20:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:55.666 true 00:06:55.666 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:55.666 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.927 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.188 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:56.188 20:59:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:56.188 true 00:06:56.188 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:56.188 20:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.575 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:57.575 20:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:57.837 true 00:06:57.837 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:57.837 20:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.781 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.781 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:58.782 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:59.042 true 00:06:59.042 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:59.042 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.303 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.600 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:59.600 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:59.600 true 00:06:59.600 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:06:59.600 21:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.859 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.859 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:59.859 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:00.120 true 00:07:00.120 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:00.120 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.381 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.643 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:00.643 21:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:00.643 true 00:07:00.643 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:00.643 21:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.031 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:02.031 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:02.292 true 00:07:02.292 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:02.292 21:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.232 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.232 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:03.232 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:03.492 true 00:07:03.492 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:03.492 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.492 21:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.752 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:03.752 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:04.011 true 00:07:04.011 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:04.011 21:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.950 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.210 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:07:05.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.210 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.210 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:05.210 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:05.471 true 00:07:05.471 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:05.471 21:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.414 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.414 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:06.414 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:06.675 true 00:07:06.675 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:06.675 21:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.675 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.937 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:06.937 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:07.197 true 00:07:07.197 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:07.197 21:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.585 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.585 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:08.585 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:08.585 true 00:07:08.585 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:08.585 21:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.527 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.788 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:09.788 21:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:09.788 true 00:07:09.788 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:09.788 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.049 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.311 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:10.311 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:10.311 true 00:07:10.311 21:00:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:10.311 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.572 21:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.833 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:10.833 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:10.833 true 00:07:10.833 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:10.833 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.093 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.354 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:11.354 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:11.354 true 00:07:11.354 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:11.354 21:00:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.615 21:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.876 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:11.876 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:11.876 true 00:07:11.876 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:11.876 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.137 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.398 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:12.398 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:12.398 true 00:07:12.398 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:12.398 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.660 21:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.921 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:12.921 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:12.921 true 00:07:12.921 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:12.921 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.181 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.441 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:13.441 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:13.441 true 00:07:13.441 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:13.441 21:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.702 
21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.985 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:13.985 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:13.985 true 00:07:13.985 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:13.985 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.245 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.505 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:14.505 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:14.505 true 00:07:14.505 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:14.505 21:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.765 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.027 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:15.027 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:15.027 true 00:07:15.287 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:15.287 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.287 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.547 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:15.547 21:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:15.808 true 00:07:15.808 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922 00:07:15.808 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.808 Initializing NVMe Controllers 00:07:15.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:15.808 Controller IO queue size 128, less than required. 
00:07:15.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.808 Controller IO queue size 128, less than required.
00:07:15.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:15.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:15.808 Initialization complete. Launching workers.
00:07:15.808 ========================================================
00:07:15.808                                                                                                   Latency(us)
00:07:15.808 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:15.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1760.47       0.86   38709.15    2169.55 1123147.03
00:07:15.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15828.15       7.73    8086.45    1401.00  529793.80
00:07:15.808 ========================================================
00:07:15.808 Total                                                                    :   17588.61       8.59   11151.52    1401.00 1123147.03
00:07:15.808
00:07:15.808 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:16.069 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:07:16.069 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:07:16.329 true
00:07:16.329 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1869922
00:07:16.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1869922) - No such process
00:07:16.329 21:00:17
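The xtrace above is the stress loop at lines 44-50 of ns_hotplug_stress.sh: while the I/O generator (PID 1869922) is still alive, the Delay0-backed namespace is hot-removed and re-added, and the NULL1 bdev is grown by one unit each pass, until `kill -0` finally reports "No such process". A minimal sketch of that loop, reconstructed from the trace; `RPC_CMD` (a stand-in for the scripts/rpc.py invocation) and `perf_pid` are assumed names, not confirmed by the log:

```shell
# Reconstructed sketch of ns_hotplug_stress.sh lines 44-50, based on the xtrace.
# RPC_CMD and perf_pid are assumed names, not confirmed by the log.
hotplug_resize_loop() {
    local perf_pid=$1
    local null_size=1021
    # Keep cycling while the I/O generator still runs (kill -0 probes liveness).
    while kill -0 "$perf_pid" 2>/dev/null; do
        $RPC_CMD nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
        $RPC_CMD nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
        ((++null_size))                                                   # line 49
        $RPC_CMD bdev_null_resize NULL1 "$null_size"                      # line 50
    done
}
```

In the log the loop ends on its own: the bdevperf workload exits, `kill -0 1869922` fails, and the script falls through to the `wait`/cleanup at lines 53-55.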
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1869922 00:07:16.329 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.329 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:16.589 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:16.589 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:16.589 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:16.589 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.589 21:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:16.848 null0 00:07:16.848 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.848 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.848 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:16.848 null1 00:07:16.848 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.848 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:07:16.848 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:17.109 null2 00:07:17.109 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.109 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.109 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:17.369 null3 00:07:17.369 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.369 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.369 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:17.369 null4 00:07:17.630 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.630 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.630 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:17.630 null5 00:07:17.630 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.630 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.630 21:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:17.891 null6 00:07:17.891 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.891 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.891 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:17.891 null7 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
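Lines 58-60, whose xtrace is interleaved above, set up one null bdev per worker thread: eight bdevs (null0 through null7), each 100 MiB with a 4096-byte block size. A sketch under the same assumption that `RPC_CMD` stands in for the scripts/rpc.py invocation:

```shell
# Sketch of ns_hotplug_stress.sh lines 58-60: one null bdev per worker thread.
# RPC_CMD is an assumed stand-in for the scripts/rpc.py invocation.
create_null_bdevs() {
    local nthreads=$1
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size_mb> <block_size>
        $RPC_CMD bdev_null_create "null$i" 100 4096
    done
}
```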
00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.153 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1877207 1877209 1877212 1877216 1877219 1877222 1877225 1877227 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.154 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.415 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.676 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.676 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.676 21:00:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.676 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.676 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.676 21:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.676 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.676 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.676 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.676 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.676 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.936 21:00:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.936 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.937 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.197 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.458 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.458 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.458 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.459 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.718 21:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.718 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.718 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.718 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.718 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.719 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.719 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.719 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.719 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.979 21:00:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.979 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.239 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.240 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.500 21:00:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.500 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.760 21:00:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.760 21:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.760 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.761 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.021 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.022 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.022 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:07:21.022 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.022 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.282 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.543 21:00:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.543 21:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.804 
21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.804 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:22.064 rmmod nvme_tcp 00:07:22.064 rmmod nvme_fabrics 00:07:22.064 rmmod nvme_keyring 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1869526 ']' 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1869526 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1869526 ']' 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1869526 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1869526 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1869526' 00:07:22.064 killing process with pid 1869526 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1869526 00:07:22.064 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1869526 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.325 21:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:24.239 00:07:24.239 real 0m49.727s 00:07:24.239 user 3m14.623s 00:07:24.239 sys 0m16.533s 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:24.239 ************************************ 00:07:24.239 END TEST nvmf_ns_hotplug_stress 00:07:24.239 ************************************ 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.239 21:00:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.501 ************************************ 00:07:24.501 START TEST nvmf_delete_subsystem 00:07:24.501 ************************************ 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:24.501 * Looking for test storage... 00:07:24.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.501 
21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.501 --rc genhtml_branch_coverage=1 00:07:24.501 --rc genhtml_function_coverage=1 00:07:24.501 --rc genhtml_legend=1 
00:07:24.501 --rc geninfo_all_blocks=1 00:07:24.501 --rc geninfo_unexecuted_blocks=1 00:07:24.501 00:07:24.501 ' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.501 --rc genhtml_branch_coverage=1 00:07:24.501 --rc genhtml_function_coverage=1 00:07:24.501 --rc genhtml_legend=1 00:07:24.501 --rc geninfo_all_blocks=1 00:07:24.501 --rc geninfo_unexecuted_blocks=1 00:07:24.501 00:07:24.501 ' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.501 --rc genhtml_branch_coverage=1 00:07:24.501 --rc genhtml_function_coverage=1 00:07:24.501 --rc genhtml_legend=1 00:07:24.501 --rc geninfo_all_blocks=1 00:07:24.501 --rc geninfo_unexecuted_blocks=1 00:07:24.501 00:07:24.501 ' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.501 --rc genhtml_branch_coverage=1 00:07:24.501 --rc genhtml_function_coverage=1 00:07:24.501 --rc genhtml_legend=1 00:07:24.501 --rc geninfo_all_blocks=1 00:07:24.501 --rc geninfo_unexecuted_blocks=1 00:07:24.501 00:07:24.501 ' 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.501 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.502 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.764 21:00:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.764 21:00:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.906 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.906 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.906 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.906 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:32.907 21:00:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:32.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:32.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:32.907 Found net devices under 0000:31:00.0: cvl_0_0 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.907 21:00:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:32.907 Found net devices under 0000:31:00.1: cvl_0_1 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:07:32.907 00:07:32.907 --- 10.0.0.2 ping statistics --- 00:07:32.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.907 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:07:32.907 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:32.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:07:32.907 00:07:32.907 --- 10.0.0.1 ping statistics --- 00:07:32.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.907 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.908 21:00:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1882811 00:07:32.908 21:00:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1882811 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1882811 ']' 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.908 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.908 [2024-12-05 21:00:34.057965] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:32.908 [2024-12-05 21:00:34.058016] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.908 [2024-12-05 21:00:34.143790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.908 [2024-12-05 21:00:34.178588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:32.908 [2024-12-05 21:00:34.178623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.908 [2024-12-05 21:00:34.178631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.908 [2024-12-05 21:00:34.178637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.908 [2024-12-05 21:00:34.178643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.908 [2024-12-05 21:00:34.179899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.908 [2024-12-05 21:00:34.179928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.478 [2024-12-05 21:00:34.885838] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.478 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.478 [2024-12-05 21:00:34.910066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 NULL1 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.737 21:00:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 Delay0 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1883160 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:33.737 21:00:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:33.737 [2024-12-05 21:00:35.016898] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
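Stripped of timestamps and xtrace noise, the RPC sequence that `target/delete_subsystem.sh` drives in the log above can be sketched as the dry-run script below. The `scripts/rpc.py` path is an assumption based on a standard SPDK checkout; each step is echoed rather than executed so the sequence can be inspected without a running target.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem test flow seen in this log.
# Assumption: rpc.py lives at scripts/rpc.py in an SPDK checkout.
rpc="scripts/rpc.py"

# Print each command instead of running it; replace the echo with "$@"
# to execute against a live nvmf_tgt.
run() { echo "+ $*"; }

run "$rpc" nvmf_create_transport -t tcp -o -u 8192
run "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
run "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
run "$rpc" bdev_null_create NULL1 1000 512
run "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
run "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf runs in the background while the subsystem is deleted
# out from under it, producing the "Read/Write completed with error"
# lines that follow in the log:
run "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The delay bdev (`bdev_delay_create` with 1s latencies) is what keeps I/O in flight long enough for `nvmf_delete_subsystem` to race against it, which is the point of the test.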
00:07:35.641 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.641 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.641 21:00:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 
00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 starting I/O failed: -6 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Write completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 [2024-12-05 21:00:37.141895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d4f00 is same with the state(6) to be set 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, sc=8) 00:07:35.901 Read completed with error (sct=0, 
sc=8) 00:07:35.901 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages elided ...]
00:07:35.901 [2024-12-05 21:00:37.145103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f750000d4b0 is same with the state(6) to be set
00:07:35.902 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages elided ...]
00:07:35.902 [2024-12-05 21:00:37.145564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7500000c40 is same with the state(6) to be set
00:07:36.842 [2024-12-05 21:00:38.117175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d65f0 is same with the state(6) to be set
00:07:36.842 [... repeated "Read/Write completed with error (sct=0, sc=8)" messages elided ...]
00:07:36.842 [2024-12-05 21:00:38.145296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d50e0 is same with the state(6) to be set
00:07:36.842 [... repeated "Read/Write completed with error (sct=0, sc=8)" messages elided ...]
00:07:36.842 [2024-12-05 21:00:38.145901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d54a0 is same with the state(6) to be set
00:07:36.843 [... repeated "Read/Write completed with error (sct=0, sc=8)" messages elided ...]
00:07:36.843 [2024-12-05 21:00:38.147707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f750000d020 is same with the state(6) to be set
00:07:36.843 [... repeated "Read/Write completed with error (sct=0, sc=8)" messages elided ...]
00:07:36.843 [2024-12-05 21:00:38.147830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f750000d7e0 is same with the state(6) to be set
00:07:36.843 Initializing NVMe Controllers
00:07:36.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:36.843 Controller IO queue size 128, less than required.
00:07:36.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:36.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:36.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:36.843 Initialization complete. Launching workers.
00:07:36.843 ========================================================
00:07:36.843 Latency(us)
00:07:36.843 Device Information : IOPS MiB/s Average min max
00:07:36.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.26 0.09 879575.05 257.49 1006746.51
00:07:36.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.28 0.09 929622.79 478.16 2002173.46
00:07:36.843 ========================================================
00:07:36.843 Total : 351.54 0.17 904386.25 257.49 2002173.46
00:07:36.843
00:07:36.843 [2024-12-05 21:00:38.148463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d65f0 (9): Bad file descriptor
00:07:36.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:36.843 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.843 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:36.843 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1883160 00:07:36.843 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1883160 00:07:37.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1883160) - No such process 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1883160 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:37.414 21:00:38
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1883160 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1883160 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.414 
21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.414 [2024-12-05 21:00:38.677730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.414 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.415 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.415 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1883846 00:07:37.415 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:37.415 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.415 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:37.415 21:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.415 [2024-12-05 21:00:38.758047] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:37.985 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.985 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:37.985 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.556 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.556 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:38.556 21:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.817 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.817 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:38.817 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.387 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.387 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:39.387 21:00:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.959 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.959 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:39.959 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.530 21:00:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.530 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:40.530 21:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:40.530 Initializing NVMe Controllers
00:07:40.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:40.530 Controller IO queue size 128, less than required.
00:07:40.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:40.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:40.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:40.530 Initialization complete. Launching workers.
00:07:40.530 ========================================================
00:07:40.530 Latency(us)
00:07:40.530 Device Information : IOPS MiB/s Average min max
00:07:40.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001957.12 1000122.70 1007475.61
00:07:40.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002876.44 1000215.42 1009732.56
00:07:40.530 ========================================================
00:07:40.530 Total : 256.00 0.12 1002416.78 1000122.70 1009732.56
00:07:40.530
00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1883846 00:07:41.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1883846) - No such process 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 1883846 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.103 rmmod nvme_tcp 00:07:41.103 rmmod nvme_fabrics 00:07:41.103 rmmod nvme_keyring 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1882811 ']' 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1882811 00:07:41.103 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1882811 ']' 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1882811 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:41.104 21:00:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882811 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882811' 00:07:41.104 killing process with pid 1882811 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1882811 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1882811 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.104 21:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.651 00:07:43.651 real 0m18.874s 00:07:43.651 user 0m30.744s 00:07:43.651 sys 0m7.191s 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.651 ************************************ 00:07:43.651 END TEST nvmf_delete_subsystem 00:07:43.651 ************************************ 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.651 ************************************ 00:07:43.651 START TEST nvmf_host_management 00:07:43.651 ************************************ 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.651 * Looking for test storage... 
00:07:43.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:43.651 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:43.652 21:00:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.652 21:00:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:43.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.652 --rc genhtml_branch_coverage=1 00:07:43.652 --rc genhtml_function_coverage=1 00:07:43.652 --rc genhtml_legend=1 00:07:43.652 --rc geninfo_all_blocks=1 00:07:43.652 --rc geninfo_unexecuted_blocks=1 00:07:43.652 00:07:43.652 ' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:43.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.652 --rc genhtml_branch_coverage=1 00:07:43.652 --rc genhtml_function_coverage=1 00:07:43.652 --rc genhtml_legend=1 00:07:43.652 --rc geninfo_all_blocks=1 00:07:43.652 --rc geninfo_unexecuted_blocks=1 00:07:43.652 00:07:43.652 ' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:43.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.652 --rc genhtml_branch_coverage=1 00:07:43.652 --rc genhtml_function_coverage=1 00:07:43.652 --rc genhtml_legend=1 00:07:43.652 --rc geninfo_all_blocks=1 00:07:43.652 --rc geninfo_unexecuted_blocks=1 00:07:43.652 00:07:43.652 ' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:43.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.652 --rc genhtml_branch_coverage=1 00:07:43.652 --rc genhtml_function_coverage=1 00:07:43.652 --rc genhtml_legend=1 00:07:43.652 --rc geninfo_all_blocks=1 00:07:43.652 --rc geninfo_unexecuted_blocks=1 00:07:43.652 00:07:43.652 ' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.652 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
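[Editor's note] The trace above records a real (non-fatal) script error: `'[' '' -eq 1 ']'` produces `line 33: [: : integer expression expected` because an empty variable reaches the numeric `-eq` test. Two standard defensive patterns for that situation (the variable name `flag` here is illustrative, not taken from `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# '[ "$flag" -eq 1 ]' errors when flag is empty, since -eq requires
# an integer operand on both sides.
flag=""

# Fix 1: default the expansion so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi

# Fix 2: guard with a non-empty check before the numeric test.
if [ -n "$flag" ] && [ "$flag" -eq 1 ]; then
    echo "enabled"
fi
```

With `flag=""` both fixes take the disabled path silently instead of printing the error seen in the log.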
MALLOC_BDEV_SIZE=64 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:43.653 21:00:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.791 21:00:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.791 21:00:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:51.791 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:51.791 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:51.791 21:00:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:51.791 Found net devices under 0000:31:00.0: cvl_0_0 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:51.791 Found net devices under 0000:31:00.1: cvl_0_1 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:51.791 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.792 21:00:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.792 21:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:07:51.792 00:07:51.792 --- 10.0.0.2 ping statistics --- 00:07:51.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.792 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:07:51.792 00:07:51.792 --- 10.0.0.1 ping statistics --- 00:07:51.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.792 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.792 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
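[Editor's note] The sequence traced above (`ip netns add`, moving the target NIC into the namespace, per-side `ip addr add`/`ip link set … up`, an iptables ACCEPT for port 4420, then cross-pings) is the standard recipe for isolating the NVMe/TCP target on one interface. A condensed sketch using a veth pair in place of the physical `cvl_0_0`/`cvl_0_1` ports (the function, interface, and namespace names are illustrative; running it requires root and iproute2, so it is gated behind a hypothetical `RUN_NETNS_DEMO` switch):

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup traced above, with a veth pair
# standing in for the physical ports. Root required to actually run.
setup_tcp_netns() {
    local ns=$1 tgt_ip=$2 ini_ip=$3
    ip netns add "$ns"
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns "$ns"                  # target side lives in the namespace
    ip addr add "$ini_ip/24" dev veth_ini             # initiator side stays in the host
    ip netns exec "$ns" ip addr add "$tgt_ip/24" dev veth_tgt
    ip link set veth_ini up
    ip netns exec "$ns" ip link set veth_tgt up
    ip netns exec "$ns" ip link set lo up
    # Let the NVMe/TCP port through, as the ipts wrapper above does.
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    # Verify connectivity both ways, mirroring the two pings in the log.
    ping -c 1 "$tgt_ip" && ip netns exec "$ns" ping -c 1 "$ini_ip"
}

# Reconfigures networking, so only run when explicitly requested:
if [ "${RUN_NETNS_DEMO:-0}" -eq 1 ]; then
    setup_tcp_netns demo_ns_spdk 10.0.0.2 10.0.0.1
fi
```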
00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1889326 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1889326 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1889326 ']' 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.054 21:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.054 [2024-12-05 21:00:53.307032] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:52.054 [2024-12-05 21:00:53.307100] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.054 [2024-12-05 21:00:53.416847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.054 [2024-12-05 21:00:53.469574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.054 [2024-12-05 21:00:53.469629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.054 [2024-12-05 21:00:53.469638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.054 [2024-12-05 21:00:53.469645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.054 [2024-12-05 21:00:53.469651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.054 [2024-12-05 21:00:53.471730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.054 [2024-12-05 21:00:53.471775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.054 [2024-12-05 21:00:53.471922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:52.054 [2024-12-05 21:00:53.471923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.998 [2024-12-05 21:00:54.173360] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:52.998 21:00:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.998 Malloc0 00:07:52.998 [2024-12-05 21:00:54.244144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1889604 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1889604 /var/tmp/bdevperf.sock 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1889604 ']' 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.998 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.998 { 00:07:52.998 "params": { 00:07:52.998 "name": "Nvme$subsystem", 00:07:52.998 "trtype": "$TEST_TRANSPORT", 00:07:52.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.998 "adrfam": "ipv4", 00:07:52.998 "trsvcid": "$NVMF_PORT", 00:07:52.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.998 "hdgst": ${hdgst:-false}, 
00:07:52.998 "ddgst": ${ddgst:-false} 00:07:52.998 }, 00:07:52.999 "method": "bdev_nvme_attach_controller" 00:07:52.999 } 00:07:52.999 EOF 00:07:52.999 )") 00:07:52.999 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:52.999 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:52.999 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:52.999 21:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.999 "params": { 00:07:52.999 "name": "Nvme0", 00:07:52.999 "trtype": "tcp", 00:07:52.999 "traddr": "10.0.0.2", 00:07:52.999 "adrfam": "ipv4", 00:07:52.999 "trsvcid": "4420", 00:07:52.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.999 "hdgst": false, 00:07:52.999 "ddgst": false 00:07:52.999 }, 00:07:52.999 "method": "bdev_nvme_attach_controller" 00:07:52.999 }' 00:07:52.999 [2024-12-05 21:00:54.346612] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:52.999 [2024-12-05 21:00:54.346665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889604 ] 00:07:52.999 [2024-12-05 21:00:54.424513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.258 [2024-12-05 21:00:54.460790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.258 Running I/O for 10 seconds... 
00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.831 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.831 [2024-12-05 21:00:55.212447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is 
same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be 
set 00:07:53.831 [2024-12-05 21:00:55.212596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.831 [2024-12-05 21:00:55.212603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 
21:00:55.212681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212761] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212842] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.212877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e7d40 is same with the state(6) to be set 00:07:53.832 [2024-12-05 21:00:55.213154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 
[2024-12-05 21:00:55.213638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.213983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.213990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.214003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.214012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.214022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 
21:00:55.214030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.214039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.214046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.214056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.214065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.214075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.214083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.832 [2024-12-05 21:00:55.214092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.832 [2024-12-05 21:00:55.214099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.833 [2024-12-05 21:00:55.214294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.833 [2024-12-05 21:00:55.214303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6feb0 is same with the state(6) to be set 00:07:53.833 [2024-12-05 21:00:55.215564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:53.833 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.833 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:53.833 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.833 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.833 task offset: 98944 on job bdev=Nvme0n1 fails 00:07:53.833 00:07:53.833 Latency(us) 00:07:53.833 [2024-12-05T20:00:55.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.833 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.833 Job: Nvme0n1 ended in about 0.52 seconds with error 00:07:53.833 Verification LBA range: start 0x0 length 0x400 00:07:53.833 Nvme0n1 : 0.52 1478.00 92.38 122.37 0.00 38975.19 4041.39 33423.36 00:07:53.833 [2024-12-05T20:00:55.270Z] =================================================================================================================== 00:07:53.833 [2024-12-05T20:00:55.270Z] Total : 1478.00 92.38 122.37 0.00 38975.19 4041.39 33423.36 00:07:53.833 [2024-12-05 21:00:55.217578] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.833 [2024-12-05 21:00:55.217601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5fb10 (9): Bad file descriptor 00:07:53.833 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.833 21:00:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:54.093 [2024-12-05 21:00:55.272312] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1889604 00:07:55.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1889604) - No such process 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:55.033 { 00:07:55.033 "params": { 00:07:55.033 "name": "Nvme$subsystem", 00:07:55.033 "trtype": "$TEST_TRANSPORT", 00:07:55.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.033 "adrfam": "ipv4", 00:07:55.033 "trsvcid": "$NVMF_PORT", 00:07:55.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.033 "hdgst": ${hdgst:-false}, 00:07:55.033 "ddgst": ${ddgst:-false} 00:07:55.033 }, 00:07:55.033 "method": "bdev_nvme_attach_controller" 00:07:55.033 } 00:07:55.033 EOF 00:07:55.033 )") 00:07:55.033 
21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:55.033 21:00:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:55.033 "params": { 00:07:55.033 "name": "Nvme0", 00:07:55.033 "trtype": "tcp", 00:07:55.033 "traddr": "10.0.0.2", 00:07:55.033 "adrfam": "ipv4", 00:07:55.033 "trsvcid": "4420", 00:07:55.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.033 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.033 "hdgst": false, 00:07:55.033 "ddgst": false 00:07:55.033 }, 00:07:55.033 "method": "bdev_nvme_attach_controller" 00:07:55.033 }' 00:07:55.033 [2024-12-05 21:00:56.286846] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:07:55.033 [2024-12-05 21:00:56.286905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889960 ] 00:07:55.033 [2024-12-05 21:00:56.365030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.033 [2024-12-05 21:00:56.400171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.293 Running I/O for 1 seconds... 
00:07:56.234 2073.00 IOPS, 129.56 MiB/s 00:07:56.234 Latency(us) 00:07:56.234 [2024-12-05T20:00:57.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.234 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:56.234 Verification LBA range: start 0x0 length 0x400 00:07:56.234 Nvme0n1 : 1.01 2110.40 131.90 0.00 0.00 29670.29 1556.48 29272.75 00:07:56.234 [2024-12-05T20:00:57.671Z] =================================================================================================================== 00:07:56.234 [2024-12-05T20:00:57.671Z] Total : 2110.40 131.90 0.00 0.00 29670.29 1556.48 29272.75 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.495 21:00:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.495 rmmod nvme_tcp 00:07:56.495 rmmod nvme_fabrics 00:07:56.495 rmmod nvme_keyring 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1889326 ']' 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1889326 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1889326 ']' 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1889326 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1889326 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1889326' 00:07:56.495 killing process with pid 1889326 00:07:56.495 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1889326 00:07:56.495 21:00:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1889326 00:07:56.756 [2024-12-05 21:00:57.961095] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.756 21:00:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.669 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.670 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:58.670 00:07:58.670 real 0m15.403s 00:07:58.670 user 0m22.942s 
00:07:58.670 sys 0m7.355s 00:07:58.670 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.670 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.670 ************************************ 00:07:58.670 END TEST nvmf_host_management 00:07:58.670 ************************************ 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.932 ************************************ 00:07:58.932 START TEST nvmf_lvol 00:07:58.932 ************************************ 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:58.932 * Looking for test storage... 
00:07:58.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.932 21:01:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.932 --rc genhtml_branch_coverage=1 00:07:58.932 --rc genhtml_function_coverage=1 00:07:58.932 --rc genhtml_legend=1 00:07:58.932 --rc geninfo_all_blocks=1 00:07:58.932 --rc geninfo_unexecuted_blocks=1 
00:07:58.932 00:07:58.932 ' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.932 --rc genhtml_branch_coverage=1 00:07:58.932 --rc genhtml_function_coverage=1 00:07:58.932 --rc genhtml_legend=1 00:07:58.932 --rc geninfo_all_blocks=1 00:07:58.932 --rc geninfo_unexecuted_blocks=1 00:07:58.932 00:07:58.932 ' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.932 --rc genhtml_branch_coverage=1 00:07:58.932 --rc genhtml_function_coverage=1 00:07:58.932 --rc genhtml_legend=1 00:07:58.932 --rc geninfo_all_blocks=1 00:07:58.932 --rc geninfo_unexecuted_blocks=1 00:07:58.932 00:07:58.932 ' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.932 --rc genhtml_branch_coverage=1 00:07:58.932 --rc genhtml_function_coverage=1 00:07:58.932 --rc genhtml_legend=1 00:07:58.932 --rc geninfo_all_blocks=1 00:07:58.932 --rc geninfo_unexecuted_blocks=1 00:07:58.932 00:07:58.932 ' 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.932 21:01:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.932 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.195 21:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.345 
21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.345 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.345 21:01:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.345 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.345 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:07.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:07.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:08:07.346 00:08:07.346 --- 10.0.0.2 ping statistics --- 00:08:07.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.346 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:08:07.346 00:08:07.346 --- 10.0.0.1 ping statistics --- 00:08:07.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.346 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1894994 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1894994 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1894994 ']' 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.346 21:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:07.346 [2024-12-05 21:01:08.510786] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:07.346 [2024-12-05 21:01:08.510854] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.346 [2024-12-05 21:01:08.603596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:07.346 [2024-12-05 21:01:08.645932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.346 [2024-12-05 21:01:08.645969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.346 [2024-12-05 21:01:08.645978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.346 [2024-12-05 21:01:08.645984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.346 [2024-12-05 21:01:08.645990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:07.346 [2024-12-05 21:01:08.647431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.346 [2024-12-05 21:01:08.647547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.346 [2024-12-05 21:01:08.647549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.917 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.917 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:07.917 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.917 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.917 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.178 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.178 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.178 [2024-12-05 21:01:09.511634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.178 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.439 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:08.439 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.700 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:08.700 21:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:08.700 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:08.960 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=93b67e05-b2b9-4e8f-8f88-27737d13c44d 00:08:08.960 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 93b67e05-b2b9-4e8f-8f88-27737d13c44d lvol 20 00:08:09.262 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=340c73eb-4742-48ff-8920-54f29544e51f 00:08:09.262 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.622 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 340c73eb-4742-48ff-8920-54f29544e51f 00:08:09.622 21:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.622 [2024-12-05 21:01:10.989873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.622 21:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.923 21:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1895702 00:08:09.923 21:01:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:09.923 21:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:10.919 21:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 340c73eb-4742-48ff-8920-54f29544e51f MY_SNAPSHOT 00:08:11.180 21:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6d87f15c-5ef5-452f-ae91-3553ed873e74 00:08:11.180 21:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 340c73eb-4742-48ff-8920-54f29544e51f 30 00:08:11.440 21:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6d87f15c-5ef5-452f-ae91-3553ed873e74 MY_CLONE 00:08:11.440 21:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=312069de-bd11-4a63-8d9f-39fc0e5d201e 00:08:11.440 21:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 312069de-bd11-4a63-8d9f-39fc0e5d201e 00:08:12.012 21:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1895702 00:08:20.151 Initializing NVMe Controllers 00:08:20.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.151 Controller IO queue size 128, less than required. 00:08:20.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:20.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.151 Initialization complete. Launching workers. 00:08:20.151 ======================================================== 00:08:20.151 Latency(us) 00:08:20.151 Device Information : IOPS MiB/s Average min max 00:08:20.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16864.80 65.88 7592.84 1133.63 58263.37 00:08:20.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12218.90 47.73 10477.17 2572.94 60122.19 00:08:20.151 ======================================================== 00:08:20.151 Total : 29083.70 113.61 8804.63 1133.63 60122.19 00:08:20.151 00:08:20.151 21:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.412 21:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 340c73eb-4742-48ff-8920-54f29544e51f 00:08:20.672 21:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 93b67e05-b2b9-4e8f-8f88-27737d13c44d 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.672 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.672 rmmod nvme_tcp 00:08:20.933 rmmod nvme_fabrics 00:08:20.933 rmmod nvme_keyring 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1894994 ']' 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1894994 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1894994 ']' 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1894994 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894994 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894994' 00:08:20.933 killing process with pid 1894994 00:08:20.933 21:01:22 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1894994 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1894994 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.933 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.193 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.193 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.193 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.193 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.193 21:01:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.102 00:08:23.102 real 0m24.297s 00:08:23.102 user 1m4.334s 00:08:23.102 sys 0m8.896s 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 ************************************ 00:08:23.102 END TEST 
nvmf_lvol 00:08:23.102 ************************************ 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.102 21:01:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 ************************************ 00:08:23.102 START TEST nvmf_lvs_grow 00:08:23.102 ************************************ 00:08:23.103 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.364 * Looking for test storage... 00:08:23.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.364 21:01:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.364 --rc genhtml_branch_coverage=1 00:08:23.364 --rc genhtml_function_coverage=1 00:08:23.364 --rc genhtml_legend=1 00:08:23.364 --rc geninfo_all_blocks=1 00:08:23.364 --rc geninfo_unexecuted_blocks=1 00:08:23.364 00:08:23.364 ' 
00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.364 --rc genhtml_branch_coverage=1 00:08:23.364 --rc genhtml_function_coverage=1 00:08:23.364 --rc genhtml_legend=1 00:08:23.364 --rc geninfo_all_blocks=1 00:08:23.364 --rc geninfo_unexecuted_blocks=1 00:08:23.364 00:08:23.364 ' 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.364 --rc genhtml_branch_coverage=1 00:08:23.364 --rc genhtml_function_coverage=1 00:08:23.364 --rc genhtml_legend=1 00:08:23.364 --rc geninfo_all_blocks=1 00:08:23.364 --rc geninfo_unexecuted_blocks=1 00:08:23.364 00:08:23.364 ' 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.364 --rc genhtml_branch_coverage=1 00:08:23.364 --rc genhtml_function_coverage=1 00:08:23.364 --rc genhtml_legend=1 00:08:23.364 --rc geninfo_all_blocks=1 00:08:23.364 --rc geninfo_unexecuted_blocks=1 00:08:23.364 00:08:23.364 ' 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.364 21:01:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:23.364 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.365 
21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.365 21:01:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.365 
21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.365 21:01:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:31.501 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:31.501 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.501 
21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:31.501 Found net devices under 0000:31:00.0: cvl_0_0 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.501 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:31.501 Found net devices under 0000:31:00.1: cvl_0_1 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.502 21:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.763 21:01:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:08:31.763 00:08:31.763 --- 10.0.0.2 ping statistics --- 00:08:31.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.763 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:08:31.763 00:08:31.763 --- 10.0.0.1 ping statistics --- 00:08:31.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.763 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.763 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.023 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1902775 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1902775 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1902775 ']' 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.024 21:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.024 [2024-12-05 21:01:33.289395] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:32.024 [2024-12-05 21:01:33.289446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.024 [2024-12-05 21:01:33.373121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.024 [2024-12-05 21:01:33.408443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.024 [2024-12-05 21:01:33.408476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.024 [2024-12-05 21:01:33.408484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.024 [2024-12-05 21:01:33.408491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.024 [2024-12-05 21:01:33.408497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.024 [2024-12-05 21:01:33.409082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.992 [2024-12-05 21:01:34.286745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.992 ************************************ 00:08:32.992 START TEST lvs_grow_clean 00:08:32.992 ************************************ 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.992 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.254 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:33.254 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:33.514 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:33.514 21:01:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:33.514 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.514 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.514 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.514 21:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 831203ce-c6ca-444f-89e7-edc7081ad0bd lvol 150 00:08:33.774 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3995296-bcee-4103-81ef-df3a8a82c2d7 00:08:33.774 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.774 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.774 [2024-12-05 21:01:35.204059] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.774 [2024-12-05 21:01:35.204111] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.774 true 00:08:34.034 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:34.034 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:34.034 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:34.034 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.294 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3995296-bcee-4103-81ef-df3a8a82c2d7 00:08:34.554 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.554 [2024-12-05 21:01:35.898162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.554 21:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1903441 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1903441 /var/tmp/bdevperf.sock 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1903441 ']' 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.814 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:34.814 [2024-12-05 21:01:36.114657] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:34.815 [2024-12-05 21:01:36.114710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903441 ] 00:08:34.815 [2024-12-05 21:01:36.208358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.815 [2024-12-05 21:01:36.244271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.758 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.758 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:35.758 21:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:36.019 Nvme0n1 00:08:36.019 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:36.019 [ 00:08:36.019 { 00:08:36.019 "name": "Nvme0n1", 00:08:36.019 "aliases": [ 00:08:36.019 "c3995296-bcee-4103-81ef-df3a8a82c2d7" 00:08:36.019 ], 00:08:36.019 "product_name": "NVMe disk", 00:08:36.019 "block_size": 4096, 00:08:36.019 "num_blocks": 38912, 00:08:36.019 "uuid": "c3995296-bcee-4103-81ef-df3a8a82c2d7", 00:08:36.019 "numa_id": 0, 00:08:36.019 "assigned_rate_limits": { 00:08:36.019 "rw_ios_per_sec": 0, 00:08:36.019 "rw_mbytes_per_sec": 0, 00:08:36.019 "r_mbytes_per_sec": 0, 00:08:36.019 "w_mbytes_per_sec": 0 00:08:36.019 }, 00:08:36.019 "claimed": false, 00:08:36.019 "zoned": false, 00:08:36.019 "supported_io_types": { 00:08:36.019 "read": true, 
00:08:36.019 "write": true, 00:08:36.019 "unmap": true, 00:08:36.019 "flush": true, 00:08:36.019 "reset": true, 00:08:36.019 "nvme_admin": true, 00:08:36.019 "nvme_io": true, 00:08:36.019 "nvme_io_md": false, 00:08:36.019 "write_zeroes": true, 00:08:36.019 "zcopy": false, 00:08:36.019 "get_zone_info": false, 00:08:36.019 "zone_management": false, 00:08:36.019 "zone_append": false, 00:08:36.019 "compare": true, 00:08:36.019 "compare_and_write": true, 00:08:36.019 "abort": true, 00:08:36.019 "seek_hole": false, 00:08:36.019 "seek_data": false, 00:08:36.019 "copy": true, 00:08:36.019 "nvme_iov_md": false 00:08:36.019 }, 00:08:36.019 "memory_domains": [ 00:08:36.019 { 00:08:36.019 "dma_device_id": "system", 00:08:36.019 "dma_device_type": 1 00:08:36.019 } 00:08:36.019 ], 00:08:36.019 "driver_specific": { 00:08:36.019 "nvme": [ 00:08:36.019 { 00:08:36.019 "trid": { 00:08:36.019 "trtype": "TCP", 00:08:36.019 "adrfam": "IPv4", 00:08:36.019 "traddr": "10.0.0.2", 00:08:36.019 "trsvcid": "4420", 00:08:36.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:36.019 }, 00:08:36.019 "ctrlr_data": { 00:08:36.019 "cntlid": 1, 00:08:36.019 "vendor_id": "0x8086", 00:08:36.019 "model_number": "SPDK bdev Controller", 00:08:36.019 "serial_number": "SPDK0", 00:08:36.019 "firmware_revision": "25.01", 00:08:36.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.019 "oacs": { 00:08:36.019 "security": 0, 00:08:36.019 "format": 0, 00:08:36.019 "firmware": 0, 00:08:36.019 "ns_manage": 0 00:08:36.019 }, 00:08:36.019 "multi_ctrlr": true, 00:08:36.019 "ana_reporting": false 00:08:36.019 }, 00:08:36.019 "vs": { 00:08:36.019 "nvme_version": "1.3" 00:08:36.019 }, 00:08:36.019 "ns_data": { 00:08:36.019 "id": 1, 00:08:36.019 "can_share": true 00:08:36.019 } 00:08:36.019 } 00:08:36.019 ], 00:08:36.019 "mp_policy": "active_passive" 00:08:36.019 } 00:08:36.019 } 00:08:36.019 ] 00:08:36.019 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1903611 00:08:36.019 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:36.019 21:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.280 Running I/O for 10 seconds... 00:08:37.221 Latency(us) 00:08:37.221 [2024-12-05T20:01:38.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.221 Nvme0n1 : 1.00 17973.00 70.21 0.00 0.00 0.00 0.00 0.00 00:08:37.221 [2024-12-05T20:01:38.658Z] =================================================================================================================== 00:08:37.221 [2024-12-05T20:01:38.658Z] Total : 17973.00 70.21 0.00 0.00 0.00 0.00 0.00 00:08:37.221 00:08:38.163 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:38.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.163 Nvme0n1 : 2.00 18057.50 70.54 0.00 0.00 0.00 0.00 0.00 00:08:38.163 [2024-12-05T20:01:39.600Z] =================================================================================================================== 00:08:38.163 [2024-12-05T20:01:39.600Z] Total : 18057.50 70.54 0.00 0.00 0.00 0.00 0.00 00:08:38.163 00:08:38.163 true 00:08:38.163 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:38.163 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:38.423 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:38.423 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:38.423 21:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1903611 00:08:39.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.365 Nvme0n1 : 3.00 18023.67 70.40 0.00 0.00 0.00 0.00 0.00 00:08:39.365 [2024-12-05T20:01:40.802Z] =================================================================================================================== 00:08:39.365 [2024-12-05T20:01:40.802Z] Total : 18023.67 70.40 0.00 0.00 0.00 0.00 0.00 00:08:39.365 00:08:40.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.310 Nvme0n1 : 4.00 18070.00 70.59 0.00 0.00 0.00 0.00 0.00 00:08:40.310 [2024-12-05T20:01:41.747Z] =================================================================================================================== 00:08:40.310 [2024-12-05T20:01:41.747Z] Total : 18070.00 70.59 0.00 0.00 0.00 0.00 0.00 00:08:40.310 00:08:41.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.252 Nvme0n1 : 5.00 18107.40 70.73 0.00 0.00 0.00 0.00 0.00 00:08:41.252 [2024-12-05T20:01:42.689Z] =================================================================================================================== 00:08:41.252 [2024-12-05T20:01:42.689Z] Total : 18107.40 70.73 0.00 0.00 0.00 0.00 0.00 00:08:41.252 00:08:42.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.193 Nvme0n1 : 6.00 18145.00 70.88 0.00 0.00 0.00 0.00 0.00 00:08:42.193 [2024-12-05T20:01:43.630Z] =================================================================================================================== 00:08:42.193 
[2024-12-05T20:01:43.630Z] Total : 18145.00 70.88 0.00 0.00 0.00 0.00 0.00 00:08:42.193 00:08:43.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.136 Nvme0n1 : 7.00 18164.14 70.95 0.00 0.00 0.00 0.00 0.00 00:08:43.136 [2024-12-05T20:01:44.573Z] =================================================================================================================== 00:08:43.136 [2024-12-05T20:01:44.573Z] Total : 18164.14 70.95 0.00 0.00 0.00 0.00 0.00 00:08:43.136 00:08:44.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.099 Nvme0n1 : 8.00 18179.25 71.01 0.00 0.00 0.00 0.00 0.00 00:08:44.099 [2024-12-05T20:01:45.536Z] =================================================================================================================== 00:08:44.099 [2024-12-05T20:01:45.536Z] Total : 18179.25 71.01 0.00 0.00 0.00 0.00 0.00 00:08:44.099 00:08:45.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.483 Nvme0n1 : 9.00 18192.78 71.07 0.00 0.00 0.00 0.00 0.00 00:08:45.483 [2024-12-05T20:01:46.920Z] =================================================================================================================== 00:08:45.483 [2024-12-05T20:01:46.920Z] Total : 18192.78 71.07 0.00 0.00 0.00 0.00 0.00 00:08:45.483 00:08:46.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.423 Nvme0n1 : 10.00 18207.00 71.12 0.00 0.00 0.00 0.00 0.00 00:08:46.423 [2024-12-05T20:01:47.860Z] =================================================================================================================== 00:08:46.423 [2024-12-05T20:01:47.860Z] Total : 18207.00 71.12 0.00 0.00 0.00 0.00 0.00 00:08:46.423 00:08:46.423 00:08:46.423 Latency(us) 00:08:46.423 [2024-12-05T20:01:47.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:46.423 Nvme0n1 : 10.00 18204.95 71.11 0.00 0.00 7027.89 2075.31 17367.04 00:08:46.423 [2024-12-05T20:01:47.860Z] =================================================================================================================== 00:08:46.423 [2024-12-05T20:01:47.860Z] Total : 18204.95 71.11 0.00 0.00 7027.89 2075.31 17367.04 00:08:46.423 { 00:08:46.423 "results": [ 00:08:46.423 { 00:08:46.423 "job": "Nvme0n1", 00:08:46.423 "core_mask": "0x2", 00:08:46.423 "workload": "randwrite", 00:08:46.423 "status": "finished", 00:08:46.423 "queue_depth": 128, 00:08:46.423 "io_size": 4096, 00:08:46.423 "runtime": 10.004642, 00:08:46.423 "iops": 18204.949262552323, 00:08:46.423 "mibps": 71.11308305684501, 00:08:46.423 "io_failed": 0, 00:08:46.423 "io_timeout": 0, 00:08:46.423 "avg_latency_us": 7027.892987214541, 00:08:46.423 "min_latency_us": 2075.306666666667, 00:08:46.423 "max_latency_us": 17367.04 00:08:46.423 } 00:08:46.423 ], 00:08:46.423 "core_count": 1 00:08:46.423 } 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1903441 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1903441 ']' 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1903441 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1903441 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1903441' 00:08:46.423 killing process with pid 1903441 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1903441 00:08:46.423 Received shutdown signal, test time was about 10.000000 seconds 00:08:46.423 00:08:46.423 Latency(us) 00:08:46.423 [2024-12-05T20:01:47.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.423 [2024-12-05T20:01:47.860Z] =================================================================================================================== 00:08:46.423 [2024-12-05T20:01:47.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1903441 00:08:46.423 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.683 21:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.683 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:46.683 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.943 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.943 21:01:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:46.943 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.203 [2024-12-05 21:01:48.428496] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.203 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:47.203 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:47.203 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.204 21:01:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:47.204 request: 00:08:47.204 { 00:08:47.204 "uuid": "831203ce-c6ca-444f-89e7-edc7081ad0bd", 00:08:47.204 "method": "bdev_lvol_get_lvstores", 00:08:47.204 "req_id": 1 00:08:47.204 } 00:08:47.204 Got JSON-RPC error response 00:08:47.204 response: 00:08:47.204 { 00:08:47.204 "code": -19, 00:08:47.204 "message": "No such device" 00:08:47.204 } 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:47.204 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.463 aio_bdev 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c3995296-bcee-4103-81ef-df3a8a82c2d7 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c3995296-bcee-4103-81ef-df3a8a82c2d7 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.463 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.724 21:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3995296-bcee-4103-81ef-df3a8a82c2d7 -t 2000 00:08:47.724 [ 00:08:47.724 { 00:08:47.724 "name": "c3995296-bcee-4103-81ef-df3a8a82c2d7", 00:08:47.724 "aliases": [ 00:08:47.724 "lvs/lvol" 00:08:47.724 ], 00:08:47.724 "product_name": "Logical Volume", 00:08:47.724 "block_size": 4096, 00:08:47.724 "num_blocks": 38912, 00:08:47.724 "uuid": "c3995296-bcee-4103-81ef-df3a8a82c2d7", 00:08:47.724 "assigned_rate_limits": { 00:08:47.724 "rw_ios_per_sec": 0, 00:08:47.724 "rw_mbytes_per_sec": 0, 00:08:47.724 "r_mbytes_per_sec": 0, 00:08:47.724 "w_mbytes_per_sec": 0 00:08:47.724 }, 00:08:47.724 "claimed": false, 00:08:47.724 "zoned": false, 00:08:47.724 "supported_io_types": { 00:08:47.724 "read": true, 00:08:47.724 "write": true, 00:08:47.724 "unmap": true, 00:08:47.724 "flush": false, 00:08:47.724 "reset": true, 00:08:47.724 
"nvme_admin": false, 00:08:47.724 "nvme_io": false, 00:08:47.724 "nvme_io_md": false, 00:08:47.724 "write_zeroes": true, 00:08:47.724 "zcopy": false, 00:08:47.724 "get_zone_info": false, 00:08:47.724 "zone_management": false, 00:08:47.724 "zone_append": false, 00:08:47.724 "compare": false, 00:08:47.724 "compare_and_write": false, 00:08:47.724 "abort": false, 00:08:47.724 "seek_hole": true, 00:08:47.724 "seek_data": true, 00:08:47.724 "copy": false, 00:08:47.724 "nvme_iov_md": false 00:08:47.724 }, 00:08:47.724 "driver_specific": { 00:08:47.724 "lvol": { 00:08:47.724 "lvol_store_uuid": "831203ce-c6ca-444f-89e7-edc7081ad0bd", 00:08:47.724 "base_bdev": "aio_bdev", 00:08:47.724 "thin_provision": false, 00:08:47.724 "num_allocated_clusters": 38, 00:08:47.724 "snapshot": false, 00:08:47.724 "clone": false, 00:08:47.724 "esnap_clone": false 00:08:47.724 } 00:08:47.724 } 00:08:47.724 } 00:08:47.724 ] 00:08:47.724 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:47.724 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:47.724 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.985 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.985 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:47.985 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.246 21:01:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.246 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c3995296-bcee-4103-81ef-df3a8a82c2d7 00:08:48.246 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 831203ce-c6ca-444f-89e7-edc7081ad0bd 00:08:48.506 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.806 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.806 00:08:48.806 real 0m15.625s 00:08:48.806 user 0m15.428s 00:08:48.806 sys 0m1.287s 00:08:48.806 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.806 21:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:48.806 ************************************ 00:08:48.806 END TEST lvs_grow_clean 00:08:48.806 ************************************ 00:08:48.806 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:48.806 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:48.806 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.806 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.806 ************************************ 
00:08:48.806 START TEST lvs_grow_dirty 00:08:48.806 ************************************ 00:08:48.806 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.807 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.113 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.113 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:49.113 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:08:49.113 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:08:49.113 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:49.390 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:49.390 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:49.390 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 lvol 150 00:08:49.390 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4917a86-3dac-440c-bf2b-7bf4a6889375 00:08:49.390 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.390 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.650 [2024-12-05 21:01:50.897014] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:49.650 [2024-12-05 21:01:50.897063] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.650 true 00:08:49.650 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.650 21:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:08:49.650 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.650 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.911 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4917a86-3dac-440c-bf2b-7bf4a6889375 00:08:50.172 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:50.172 [2024-12-05 21:01:51.542968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.172 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1906580 00:08:50.432 21:01:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1906580 /var/tmp/bdevperf.sock 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1906580 ']' 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.432 21:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.432 [2024-12-05 21:01:51.760409] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:08:50.432 [2024-12-05 21:01:51.760459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906580 ] 00:08:50.432 [2024-12-05 21:01:51.851028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.692 [2024-12-05 21:01:51.881347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.263 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.263 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:51.263 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:51.524 Nvme0n1 00:08:51.524 21:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:51.784 [ 00:08:51.784 { 00:08:51.784 "name": "Nvme0n1", 00:08:51.784 "aliases": [ 00:08:51.784 "a4917a86-3dac-440c-bf2b-7bf4a6889375" 00:08:51.784 ], 00:08:51.784 "product_name": "NVMe disk", 00:08:51.784 "block_size": 4096, 00:08:51.784 "num_blocks": 38912, 00:08:51.784 "uuid": "a4917a86-3dac-440c-bf2b-7bf4a6889375", 00:08:51.784 "numa_id": 0, 00:08:51.784 "assigned_rate_limits": { 00:08:51.784 "rw_ios_per_sec": 0, 00:08:51.784 "rw_mbytes_per_sec": 0, 00:08:51.784 "r_mbytes_per_sec": 0, 00:08:51.784 "w_mbytes_per_sec": 0 00:08:51.784 }, 00:08:51.784 "claimed": false, 00:08:51.784 "zoned": false, 00:08:51.784 "supported_io_types": { 00:08:51.784 "read": true, 
00:08:51.784 "write": true, 00:08:51.784 "unmap": true, 00:08:51.784 "flush": true, 00:08:51.784 "reset": true, 00:08:51.784 "nvme_admin": true, 00:08:51.784 "nvme_io": true, 00:08:51.784 "nvme_io_md": false, 00:08:51.784 "write_zeroes": true, 00:08:51.784 "zcopy": false, 00:08:51.784 "get_zone_info": false, 00:08:51.784 "zone_management": false, 00:08:51.784 "zone_append": false, 00:08:51.784 "compare": true, 00:08:51.784 "compare_and_write": true, 00:08:51.784 "abort": true, 00:08:51.784 "seek_hole": false, 00:08:51.784 "seek_data": false, 00:08:51.784 "copy": true, 00:08:51.784 "nvme_iov_md": false 00:08:51.784 }, 00:08:51.784 "memory_domains": [ 00:08:51.784 { 00:08:51.784 "dma_device_id": "system", 00:08:51.784 "dma_device_type": 1 00:08:51.784 } 00:08:51.784 ], 00:08:51.784 "driver_specific": { 00:08:51.784 "nvme": [ 00:08:51.784 { 00:08:51.784 "trid": { 00:08:51.784 "trtype": "TCP", 00:08:51.784 "adrfam": "IPv4", 00:08:51.784 "traddr": "10.0.0.2", 00:08:51.784 "trsvcid": "4420", 00:08:51.784 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:51.784 }, 00:08:51.784 "ctrlr_data": { 00:08:51.784 "cntlid": 1, 00:08:51.784 "vendor_id": "0x8086", 00:08:51.784 "model_number": "SPDK bdev Controller", 00:08:51.784 "serial_number": "SPDK0", 00:08:51.784 "firmware_revision": "25.01", 00:08:51.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.784 "oacs": { 00:08:51.784 "security": 0, 00:08:51.784 "format": 0, 00:08:51.784 "firmware": 0, 00:08:51.784 "ns_manage": 0 00:08:51.784 }, 00:08:51.784 "multi_ctrlr": true, 00:08:51.784 "ana_reporting": false 00:08:51.784 }, 00:08:51.784 "vs": { 00:08:51.784 "nvme_version": "1.3" 00:08:51.784 }, 00:08:51.784 "ns_data": { 00:08:51.784 "id": 1, 00:08:51.784 "can_share": true 00:08:51.784 } 00:08:51.784 } 00:08:51.784 ], 00:08:51.784 "mp_policy": "active_passive" 00:08:51.784 } 00:08:51.784 } 00:08:51.784 ] 00:08:51.784 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1906920 00:08:51.784 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:51.784 21:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:51.784 Running I/O for 10 seconds... 00:08:53.168 Latency(us) 00:08:53.168 [2024-12-05T20:01:54.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.168 Nvme0n1 : 1.00 17832.00 69.66 0.00 0.00 0.00 0.00 0.00 00:08:53.168 [2024-12-05T20:01:54.605Z] =================================================================================================================== 00:08:53.168 [2024-12-05T20:01:54.605Z] Total : 17832.00 69.66 0.00 0.00 0.00 0.00 0.00 00:08:53.168 00:08:53.740 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:08:54.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.000 Nvme0n1 : 2.00 17961.00 70.16 0.00 0.00 0.00 0.00 0.00 00:08:54.000 [2024-12-05T20:01:55.437Z] =================================================================================================================== 00:08:54.000 [2024-12-05T20:01:55.437Z] Total : 17961.00 70.16 0.00 0.00 0.00 0.00 0.00 00:08:54.000 00:08:54.000 true 00:08:54.000 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:08:54.000 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:54.259 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:54.259 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:54.259 21:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1906920 00:08:54.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.830 Nvme0n1 : 3.00 18044.67 70.49 0.00 0.00 0.00 0.00 0.00 00:08:54.830 [2024-12-05T20:01:56.267Z] =================================================================================================================== 00:08:54.830 [2024-12-05T20:01:56.267Z] Total : 18044.67 70.49 0.00 0.00 0.00 0.00 0.00 00:08:54.830 00:08:55.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.775 Nvme0n1 : 4.00 18084.75 70.64 0.00 0.00 0.00 0.00 0.00 00:08:55.775 [2024-12-05T20:01:57.212Z] =================================================================================================================== 00:08:55.775 [2024-12-05T20:01:57.212Z] Total : 18084.75 70.64 0.00 0.00 0.00 0.00 0.00 00:08:55.775 00:08:57.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.163 Nvme0n1 : 5.00 18119.20 70.78 0.00 0.00 0.00 0.00 0.00 00:08:57.163 [2024-12-05T20:01:58.600Z] =================================================================================================================== 00:08:57.163 [2024-12-05T20:01:58.600Z] Total : 18119.20 70.78 0.00 0.00 0.00 0.00 0.00 00:08:57.163 00:08:58.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.106 Nvme0n1 : 6.00 18152.67 70.91 0.00 0.00 0.00 0.00 0.00 00:08:58.106 [2024-12-05T20:01:59.543Z] =================================================================================================================== 00:08:58.106 
[2024-12-05T20:01:59.543Z] Total : 18152.67 70.91 0.00 0.00 0.00 0.00 0.00 00:08:58.106 00:08:59.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.047 Nvme0n1 : 7.00 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:08:59.047 [2024-12-05T20:02:00.484Z] =================================================================================================================== 00:08:59.047 [2024-12-05T20:02:00.484Z] Total : 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:08:59.047 00:08:59.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.989 Nvme0n1 : 8.00 18182.75 71.03 0.00 0.00 0.00 0.00 0.00 00:08:59.989 [2024-12-05T20:02:01.427Z] =================================================================================================================== 00:08:59.990 [2024-12-05T20:02:01.427Z] Total : 18182.75 71.03 0.00 0.00 0.00 0.00 0.00 00:08:59.990 00:09:00.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.930 Nvme0n1 : 9.00 18200.78 71.10 0.00 0.00 0.00 0.00 0.00 00:09:00.930 [2024-12-05T20:02:02.367Z] =================================================================================================================== 00:09:00.930 [2024-12-05T20:02:02.367Z] Total : 18200.78 71.10 0.00 0.00 0.00 0.00 0.00 00:09:00.930 00:09:01.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.873 Nvme0n1 : 10.00 18210.60 71.14 0.00 0.00 0.00 0.00 0.00 00:09:01.873 [2024-12-05T20:02:03.310Z] =================================================================================================================== 00:09:01.873 [2024-12-05T20:02:03.310Z] Total : 18210.60 71.14 0.00 0.00 0.00 0.00 0.00 00:09:01.873 00:09:01.873 00:09:01.873 Latency(us) 00:09:01.873 [2024-12-05T20:02:03.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:01.873 Nvme0n1 : 10.01 18213.82 71.15 0.00 0.00 7024.43 4341.76 17039.36 00:09:01.873 [2024-12-05T20:02:03.310Z] =================================================================================================================== 00:09:01.873 [2024-12-05T20:02:03.310Z] Total : 18213.82 71.15 0.00 0.00 7024.43 4341.76 17039.36 00:09:01.873 { 00:09:01.873 "results": [ 00:09:01.873 { 00:09:01.873 "job": "Nvme0n1", 00:09:01.873 "core_mask": "0x2", 00:09:01.873 "workload": "randwrite", 00:09:01.873 "status": "finished", 00:09:01.873 "queue_depth": 128, 00:09:01.873 "io_size": 4096, 00:09:01.873 "runtime": 10.005261, 00:09:01.873 "iops": 18213.817710502506, 00:09:01.873 "mibps": 71.14772543165041, 00:09:01.873 "io_failed": 0, 00:09:01.874 "io_timeout": 0, 00:09:01.874 "avg_latency_us": 7024.430368281075, 00:09:01.874 "min_latency_us": 4341.76, 00:09:01.874 "max_latency_us": 17039.36 00:09:01.874 } 00:09:01.874 ], 00:09:01.874 "core_count": 1 00:09:01.874 } 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1906580 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1906580 ']' 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1906580 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906580 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906580' 00:09:01.874 killing process with pid 1906580 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1906580 00:09:01.874 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.874 00:09:01.874 Latency(us) 00:09:01.874 [2024-12-05T20:02:03.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.874 [2024-12-05T20:02:03.311Z] =================================================================================================================== 00:09:01.874 [2024-12-05T20:02:03.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.874 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1906580 00:09:02.135 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.135 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.396 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:02.396 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:02.657 21:02:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1902775 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1902775 00:09:02.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1902775 Killed "${NVMF_APP[@]}" "$@" 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1908953 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1908953 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1908953 ']' 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.657 21:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.657 [2024-12-05 21:02:03.997826] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:02.657 [2024-12-05 21:02:03.997889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.657 [2024-12-05 21:02:04.083839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.918 [2024-12-05 21:02:04.120018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.918 [2024-12-05 21:02:04.120049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.918 [2024-12-05 21:02:04.120057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.918 [2024-12-05 21:02:04.120064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.918 [2024-12-05 21:02:04.120070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:02.918 [2024-12-05 21:02:04.120680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.492 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.754 [2024-12-05 21:02:04.980541] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:03.754 [2024-12-05 21:02:04.980635] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:03.754 [2024-12-05 21:02:04.980666] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a4917a86-3dac-440c-bf2b-7bf4a6889375 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4917a86-3dac-440c-bf2b-7bf4a6889375 
00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.754 21:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:03.754 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4917a86-3dac-440c-bf2b-7bf4a6889375 -t 2000 00:09:04.013 [ 00:09:04.013 { 00:09:04.013 "name": "a4917a86-3dac-440c-bf2b-7bf4a6889375", 00:09:04.013 "aliases": [ 00:09:04.013 "lvs/lvol" 00:09:04.013 ], 00:09:04.013 "product_name": "Logical Volume", 00:09:04.013 "block_size": 4096, 00:09:04.013 "num_blocks": 38912, 00:09:04.013 "uuid": "a4917a86-3dac-440c-bf2b-7bf4a6889375", 00:09:04.013 "assigned_rate_limits": { 00:09:04.013 "rw_ios_per_sec": 0, 00:09:04.013 "rw_mbytes_per_sec": 0, 00:09:04.013 "r_mbytes_per_sec": 0, 00:09:04.013 "w_mbytes_per_sec": 0 00:09:04.013 }, 00:09:04.013 "claimed": false, 00:09:04.013 "zoned": false, 00:09:04.013 "supported_io_types": { 00:09:04.013 "read": true, 00:09:04.013 "write": true, 00:09:04.013 "unmap": true, 00:09:04.013 "flush": false, 00:09:04.013 "reset": true, 00:09:04.013 "nvme_admin": false, 00:09:04.013 "nvme_io": false, 00:09:04.013 "nvme_io_md": false, 00:09:04.013 "write_zeroes": true, 00:09:04.013 "zcopy": false, 00:09:04.013 "get_zone_info": false, 00:09:04.013 "zone_management": false, 00:09:04.013 "zone_append": 
false, 00:09:04.013 "compare": false, 00:09:04.013 "compare_and_write": false, 00:09:04.013 "abort": false, 00:09:04.013 "seek_hole": true, 00:09:04.013 "seek_data": true, 00:09:04.013 "copy": false, 00:09:04.013 "nvme_iov_md": false 00:09:04.013 }, 00:09:04.013 "driver_specific": { 00:09:04.014 "lvol": { 00:09:04.014 "lvol_store_uuid": "108ab32b-09a1-4b0d-a843-962ad4ec14e6", 00:09:04.014 "base_bdev": "aio_bdev", 00:09:04.014 "thin_provision": false, 00:09:04.014 "num_allocated_clusters": 38, 00:09:04.014 "snapshot": false, 00:09:04.014 "clone": false, 00:09:04.014 "esnap_clone": false 00:09:04.014 } 00:09:04.014 } 00:09:04.014 } 00:09:04.014 ] 00:09:04.014 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:04.014 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:04.014 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:04.273 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:04.273 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:04.273 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:04.273 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:04.273 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:04.533 [2024-12-05 21:02:05.848718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:04.533 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:04.533 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.534 21:02:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:04.534 21:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:04.795 request: 00:09:04.795 { 00:09:04.795 "uuid": "108ab32b-09a1-4b0d-a843-962ad4ec14e6", 00:09:04.795 "method": "bdev_lvol_get_lvstores", 00:09:04.795 "req_id": 1 00:09:04.795 } 00:09:04.795 Got JSON-RPC error response 00:09:04.795 response: 00:09:04.795 { 00:09:04.795 "code": -19, 00:09:04.795 "message": "No such device" 00:09:04.795 } 00:09:04.795 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:04.795 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.795 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.795 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.795 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.795 aio_bdev 00:09:04.795 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4917a86-3dac-440c-bf2b-7bf4a6889375 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4917a86-3dac-440c-bf2b-7bf4a6889375 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.056 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4917a86-3dac-440c-bf2b-7bf4a6889375 -t 2000 00:09:05.317 [ 00:09:05.317 { 00:09:05.317 "name": "a4917a86-3dac-440c-bf2b-7bf4a6889375", 00:09:05.317 "aliases": [ 00:09:05.317 "lvs/lvol" 00:09:05.317 ], 00:09:05.317 "product_name": "Logical Volume", 00:09:05.317 "block_size": 4096, 00:09:05.317 "num_blocks": 38912, 00:09:05.317 "uuid": "a4917a86-3dac-440c-bf2b-7bf4a6889375", 00:09:05.317 "assigned_rate_limits": { 00:09:05.317 "rw_ios_per_sec": 0, 00:09:05.317 "rw_mbytes_per_sec": 0, 00:09:05.317 "r_mbytes_per_sec": 0, 00:09:05.317 "w_mbytes_per_sec": 0 00:09:05.317 }, 00:09:05.317 "claimed": false, 00:09:05.317 "zoned": false, 00:09:05.317 "supported_io_types": { 00:09:05.317 "read": true, 00:09:05.317 "write": true, 00:09:05.317 "unmap": true, 00:09:05.317 "flush": false, 00:09:05.317 "reset": true, 00:09:05.317 "nvme_admin": false, 00:09:05.317 "nvme_io": false, 00:09:05.317 "nvme_io_md": false, 00:09:05.317 "write_zeroes": true, 00:09:05.317 "zcopy": false, 00:09:05.317 "get_zone_info": false, 00:09:05.317 "zone_management": false, 00:09:05.317 "zone_append": false, 00:09:05.317 "compare": false, 00:09:05.317 "compare_and_write": false, 
00:09:05.317 "abort": false, 00:09:05.317 "seek_hole": true, 00:09:05.317 "seek_data": true, 00:09:05.317 "copy": false, 00:09:05.317 "nvme_iov_md": false 00:09:05.317 }, 00:09:05.317 "driver_specific": { 00:09:05.317 "lvol": { 00:09:05.317 "lvol_store_uuid": "108ab32b-09a1-4b0d-a843-962ad4ec14e6", 00:09:05.317 "base_bdev": "aio_bdev", 00:09:05.317 "thin_provision": false, 00:09:05.317 "num_allocated_clusters": 38, 00:09:05.317 "snapshot": false, 00:09:05.317 "clone": false, 00:09:05.317 "esnap_clone": false 00:09:05.317 } 00:09:05.317 } 00:09:05.317 } 00:09:05.317 ] 00:09:05.317 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:05.317 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:05.317 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:05.317 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:05.317 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:05.317 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:05.578 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:05.578 21:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4917a86-3dac-440c-bf2b-7bf4a6889375 00:09:05.839 21:02:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 108ab32b-09a1-4b0d-a843-962ad4ec14e6 00:09:05.839 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.101 00:09:06.101 real 0m17.334s 00:09:06.101 user 0m45.501s 00:09:06.101 sys 0m2.889s 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.101 ************************************ 00:09:06.101 END TEST lvs_grow_dirty 00:09:06.101 ************************************ 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:06.101 nvmf_trace.0 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.101 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.101 rmmod nvme_tcp 00:09:06.101 rmmod nvme_fabrics 00:09:06.101 rmmod nvme_keyring 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1908953 ']' 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1908953 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1908953 ']' 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1908953 
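The `process_shm` steps traced above find files matching `*.<id>` under `/dev/shm` and tar each one into the output directory. A standalone sketch of that pattern, using temporary directories as stand-ins for `/dev/shm` and the autotest output path (not the real locations):

```shell
# Hypothetical stand-in for the process_shm helper traced above: collect
# shared-memory trace files named "*.<id>" and pack each into a tarball.
shm_dir=$(mktemp -d)   # stand-in for /dev/shm
out_dir=$(mktemp -d)   # stand-in for the autotest output directory
id=0

touch "$shm_dir/nvmf_trace.$id"

# Same shape as the traced find: print bare filenames, one per line.
shm_files=$(find "$shm_dir" -name "*.$id" -printf '%f\n')
for n in $shm_files; do
    tar -C "$shm_dir" -czf "$out_dir/${n}_shm.tar.gz" "$n"
done
```

`tar -C` changes into the shm directory first so the archive stores bare filenames, matching the `tar -C /dev/shm/ -cvzf ... nvmf_trace.0` invocation in the trace.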
00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908953 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908953' 00:09:06.363 killing process with pid 1908953 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1908953 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1908953 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.363 21:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.908 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.909 00:09:08.909 real 0m45.299s 00:09:08.909 user 1m7.539s 00:09:08.909 sys 0m10.972s 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.909 ************************************ 00:09:08.909 END TEST nvmf_lvs_grow 00:09:08.909 ************************************ 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.909 ************************************ 00:09:08.909 START TEST nvmf_bdev_io_wait 00:09:08.909 ************************************ 00:09:08.909 21:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:08.909 * Looking for test storage... 
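The `killprocess` sequence traced above checks liveness with the null signal (`kill -0`), terminates the process, then reaps it with `wait`. A minimal sketch under those assumptions; the `ps`/sudo guard from the traced helper is elided:

```shell
# Minimal sketch of the killprocess pattern traced above: confirm the pid
# is alive with the null signal, terminate it, then reap it with wait.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # liveness check, as in kill -0 above
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; a SIGTERM exit status is expected
}

sleep 60 &
bg_pid=$!
killprocess "$bg_pid"
```

`wait` only works on children of the current shell; the traced helper targets an arbitrary pid, so its `wait` step is a looser analogue of this.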
00:09:08.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.909 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.909 --rc genhtml_branch_coverage=1 00:09:08.909 --rc genhtml_function_coverage=1 00:09:08.909 --rc genhtml_legend=1 00:09:08.909 --rc geninfo_all_blocks=1 00:09:08.909 --rc geninfo_unexecuted_blocks=1 00:09:08.909 00:09:08.909 ' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.909 --rc genhtml_branch_coverage=1 00:09:08.909 --rc genhtml_function_coverage=1 00:09:08.909 --rc genhtml_legend=1 00:09:08.909 --rc geninfo_all_blocks=1 00:09:08.909 --rc geninfo_unexecuted_blocks=1 00:09:08.909 00:09:08.909 ' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.909 --rc genhtml_branch_coverage=1 00:09:08.909 --rc genhtml_function_coverage=1 00:09:08.909 --rc genhtml_legend=1 00:09:08.909 --rc geninfo_all_blocks=1 00:09:08.909 --rc geninfo_unexecuted_blocks=1 00:09:08.909 00:09:08.909 ' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.909 --rc genhtml_branch_coverage=1 00:09:08.909 --rc genhtml_function_coverage=1 00:09:08.909 --rc genhtml_legend=1 00:09:08.909 --rc geninfo_all_blocks=1 00:09:08.909 --rc geninfo_unexecuted_blocks=1 00:09:08.909 00:09:08.909 ' 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.909 21:02:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.909 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
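The `[: : integer expression expected` message logged above comes from evaluating `'[' '' -eq 1 ']'`: `-eq` requires both operands to be integers, and an empty string is not one. A minimal reproduction plus the usual guard (a default expansion); the variable name is illustrative, not the one in nvmf/common.sh:

```shell
# Reproduce the "integer expression expected" failure traced above, then
# show the common guard: give the variable a numeric default before -eq.
flag=''

if [ "$flag" -eq 1 ] 2>/dev/null; then   # fails with status 2, error suppressed
    echo "set"
else
    echo "empty flag: comparison errored or was false"
fi

# Guarded form: "${flag:-0}" substitutes 0 when flag is empty or unset,
# so the test is always a valid integer comparison.
if [ "${flag:-0}" -eq 1 ]; then
    result=set
else
    result=unset
fi
echo "$result"
```

The test run proceeds anyway because the failing `[` only yields a nonzero status, but the guarded form keeps the log free of the spurious error line.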
00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.910 21:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.046 21:02:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:17.046 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:17.046 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.046 21:02:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:17.046 Found net devices under 0000:31:00.0: cvl_0_0 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.046 
21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:17.046 Found net devices under 0000:31:00.1: cvl_0_1 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.046 21:02:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.046 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:09:17.047 00:09:17.047 --- 10.0.0.2 ping statistics --- 00:09:17.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.047 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:09:17.047 00:09:17.047 --- 10.0.0.1 ping statistics --- 00:09:17.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.047 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1914388 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1914388 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1914388 ']' 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.047 21:02:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:17.047 [2024-12-05 21:02:17.973056] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:17.047 [2024-12-05 21:02:17.973126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.047 [2024-12-05 21:02:18.063961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.047 [2024-12-05 21:02:18.106891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.047 [2024-12-05 21:02:18.106929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
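The namespace plumbing recorded earlier in this passage (nvmf/common.sh@265 through @291) follows a fixed shape: create a private network namespace, move the target NIC into it, address both ends, bring the links up, and confirm reachability with a ping in each direction. A dry-run sketch of that sequence is below; the interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) come from this log, and `run` only records and echoes each command since the real ones need root and physical NICs.

```shell
# Dry-run sketch of the nvmf/common.sh namespace setup seen above.
# Interface/namespace names are taken from this log; "run" only records
# and echoes, because the real commands require root and real NICs.
NVMF_TARGET_INTERFACE=cvl_0_0      # ends up inside the namespace, used by nvmf_tgt
NVMF_INITIATOR_INTERFACE=cvl_0_1   # stays in the default namespace
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }   # swap the body for: "$@" to apply for real

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
run ip link set "$NVMF_INITIATOR_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
run ping -c 1 10.0.0.2                                         # initiator -> target
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1  # target -> initiator
```

The split puts the nvmf target and the initiator on opposite sides of a namespace boundary, so both ends of the TCP transport run on one machine while still crossing a real network stack.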
00:09:17.047 [2024-12-05 21:02:18.106937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.047 [2024-12-05 21:02:18.106947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.047 [2024-12-05 21:02:18.106953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.047 [2024-12-05 21:02:18.108572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.047 [2024-12-05 21:02:18.108695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.047 [2024-12-05 21:02:18.108854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.047 [2024-12-05 21:02:18.108855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 21:02:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 [2024-12-05 21:02:18.853171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 Malloc0 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 
21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.618 [2024-12-05 21:02:18.896279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1914737 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1914739 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1914740 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
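The four bdevperf invocations above (write, read, flush, unmap) share one launch pattern: each instance gets its own `-i` id and core mask, reads its bdev configuration from a process substitution (the `--json /dev/fd/63` in the command lines), and is backgrounded so the script can `wait` on WRITE_PID, READ_PID, FLUSH_PID, and UNMAP_PID later. A minimal sketch of that pattern, with a placeholder consumer (`cat`) standing in for the bdevperf binary:

```shell
# Sketch of the parallel-launch pattern above: background one job per
# workload, feed each its config via process substitution, keep the PIDs,
# and reap every job with "wait". "consumer" is a stand-in for bdevperf.
outdir=$(mktemp -d)
consumer() { cat "$1" > "$2"; }    # stand-in for: bdevperf --json "$1" -w <workload> ...

pids=()
for workload in write read flush unmap; do
    consumer <(printf '{"w":"%s"}' "$workload") "$outdir/$workload" &
    pids+=($!)
done

for pid in "${pids[@]}"; do        # like the later wait $WRITE_PID etc.
    wait "$pid"
done
```

Process substitution lets each child read a config that exists only as a pipe, which is why the log shows `/dev/fd/63` rather than a file on disk.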
00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1914742 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.618 { 00:09:17.618 "params": { 00:09:17.618 "name": "Nvme$subsystem", 00:09:17.618 "trtype": "$TEST_TRANSPORT", 00:09:17.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.618 "adrfam": "ipv4", 00:09:17.618 "trsvcid": "$NVMF_PORT", 00:09:17.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.618 "hdgst": ${hdgst:-false}, 00:09:17.618 "ddgst": ${ddgst:-false} 00:09:17.618 }, 00:09:17.618 "method": "bdev_nvme_attach_controller" 00:09:17.618 } 00:09:17.618 EOF 00:09:17.618 )") 00:09:17.618 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.619 { 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme$subsystem", 00:09:17.619 "trtype": "$TEST_TRANSPORT", 00:09:17.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "$NVMF_PORT", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.619 "hdgst": ${hdgst:-false}, 00:09:17.619 "ddgst": ${ddgst:-false} 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 } 00:09:17.619 EOF 00:09:17.619 )") 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.619 { 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme$subsystem", 
00:09:17.619 "trtype": "$TEST_TRANSPORT", 00:09:17.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "$NVMF_PORT", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.619 "hdgst": ${hdgst:-false}, 00:09:17.619 "ddgst": ${ddgst:-false} 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 } 00:09:17.619 EOF 00:09:17.619 )") 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.619 { 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme$subsystem", 00:09:17.619 "trtype": "$TEST_TRANSPORT", 00:09:17.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "$NVMF_PORT", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.619 "hdgst": ${hdgst:-false}, 00:09:17.619 "ddgst": ${ddgst:-false} 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 } 00:09:17.619 EOF 00:09:17.619 )") 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1914737 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 
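The gen_nvmf_target_json fragments above are produced by a simple accumulate-then-join pattern: one JSON fragment per subsystem is appended to an array, and the fragments are joined with `IFS=,` (common.sh@585) before being normalized by `jq .`. A simplified, self-contained sketch of that pattern; the jq step is omitted and the fragments here carry only a subset of the real fields (traddr, trsvcid, hdgst/ddgst, etc. are dropped for brevity):

```shell
# Simplified sketch of the gen_nvmf_target_json pattern above: build one
# fragment per subsystem, then join the array with commas via IFS=,
# exactly as common.sh@585/@586 do before piping the result through jq.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do     # default to subsystem 1, as in common.sh
        config+=("$(printf '{"params":{"name":"Nvme%s","subnqn":"nqn.2016-06.io.spdk:cnode%s"},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem")")
    done
    local IFS=,                        # "${config[*]}" joins fragments with commas
    printf '%s\n' "${config[*]}"
}
```

Calling `gen_target_json 1 2` emits the two fragments comma-joined, which is the shape the bdevperf `--json` consumer expects once wrapped into a full config document.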
00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme1", 00:09:17.619 "trtype": "tcp", 00:09:17.619 "traddr": "10.0.0.2", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "4420", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.619 "hdgst": false, 00:09:17.619 "ddgst": false 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 }' 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme1", 00:09:17.619 "trtype": "tcp", 00:09:17.619 "traddr": "10.0.0.2", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "4420", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.619 "hdgst": false, 00:09:17.619 "ddgst": false 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 }' 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme1", 00:09:17.619 "trtype": "tcp", 00:09:17.619 "traddr": 
"10.0.0.2", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "4420", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.619 "hdgst": false, 00:09:17.619 "ddgst": false 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 }' 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:17.619 21:02:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:17.619 "params": { 00:09:17.619 "name": "Nvme1", 00:09:17.619 "trtype": "tcp", 00:09:17.619 "traddr": "10.0.0.2", 00:09:17.619 "adrfam": "ipv4", 00:09:17.619 "trsvcid": "4420", 00:09:17.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.619 "hdgst": false, 00:09:17.619 "ddgst": false 00:09:17.619 }, 00:09:17.619 "method": "bdev_nvme_attach_controller" 00:09:17.619 }' 00:09:17.619 [2024-12-05 21:02:18.952336] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:17.619 [2024-12-05 21:02:18.952336] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:17.619 [2024-12-05 21:02:18.952390] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:17.619 [2024-12-05 21:02:18.952391] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:17.619 [2024-12-05 21:02:18.954071] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:17.619 [2024-12-05 21:02:18.954118] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:17.619 [2024-12-05 21:02:18.955192] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:17.619 [2024-12-05 21:02:18.955238] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:17.880 [2024-12-05 21:02:19.126309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.880 [2024-12-05 21:02:19.155261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.880 [2024-12-05 21:02:19.179191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.880 [2024-12-05 21:02:19.207321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:17.880 [2024-12-05 21:02:19.226802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.880 [2024-12-05 21:02:19.255748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:17.880 [2024-12-05 21:02:19.287673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.140 [2024-12-05 21:02:19.316813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:18.140 Running I/O for 1 seconds... 00:09:18.140 Running I/O for 1 seconds... 00:09:18.140 Running I/O for 1 seconds... 00:09:18.400 Running I/O for 1 seconds... 
00:09:18.969 180200.00 IOPS, 703.91 MiB/s 00:09:18.969 Latency(us) 00:09:18.969 [2024-12-05T20:02:20.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.969 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:18.969 Nvme1n1 : 1.00 179847.41 702.53 0.00 0.00 707.61 296.96 1966.08 00:09:18.969 [2024-12-05T20:02:20.406Z] =================================================================================================================== 00:09:18.969 [2024-12-05T20:02:20.406Z] Total : 179847.41 702.53 0.00 0.00 707.61 296.96 1966.08 00:09:19.229 9071.00 IOPS, 35.43 MiB/s 00:09:19.229 Latency(us) 00:09:19.229 [2024-12-05T20:02:20.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.229 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:19.229 Nvme1n1 : 1.02 9073.80 35.44 0.00 0.00 13973.40 6608.21 27962.03 00:09:19.229 [2024-12-05T20:02:20.666Z] =================================================================================================================== 00:09:19.229 [2024-12-05T20:02:20.666Z] Total : 9073.80 35.44 0.00 0.00 13973.40 6608.21 27962.03 00:09:19.229 18147.00 IOPS, 70.89 MiB/s 00:09:19.229 Latency(us) 00:09:19.229 [2024-12-05T20:02:20.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.229 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:19.229 Nvme1n1 : 1.01 18178.94 71.01 0.00 0.00 7020.46 3495.25 14854.83 00:09:19.229 [2024-12-05T20:02:20.666Z] =================================================================================================================== 00:09:19.229 [2024-12-05T20:02:20.666Z] Total : 18178.94 71.01 0.00 0.00 7020.46 3495.25 14854.83 00:09:19.229 8872.00 IOPS, 34.66 MiB/s 00:09:19.229 Latency(us) 00:09:19.229 [2024-12-05T20:02:20.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.229 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:09:19.229 Nvme1n1 : 1.01 8979.33 35.08 0.00 0.00 14213.76 4177.92 33641.81 00:09:19.229 [2024-12-05T20:02:20.666Z] =================================================================================================================== 00:09:19.229 [2024-12-05T20:02:20.666Z] Total : 8979.33 35.08 0.00 0.00 14213.76 4177.92 33641.81 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1914739 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1914740 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1914742 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
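The `set +e` followed by `for i in {1..20}` entered above implements a bounded retry around `modprobe -v -r`: unloading nvme-tcp/nvme-fabrics can fail transiently while the subsystem's references drain, so the cleanup loops instead of failing the test on the first attempt. A minimal, runnable sketch of that retry shape; `remove_module` here is a stand-in counter (succeeding on the third call) rather than a real modprobe:

```shell
# Bounded-retry sketch of the cleanup loop above: under "set +e", attempt
# the unload up to 20 times and stop at the first success. remove_module
# is a stand-in for "modprobe -v -r" that succeeds on the third attempt.
attempts=0
remove_module() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]          # real code: modprobe -v -r "$1"
}

unload_with_retry() {
    local i
    set +e                          # unload may fail while references drain
    for i in {1..20}; do
        remove_module "$1" && break
    done
    set -e                          # restore errexit afterwards, as common.sh does
}

unload_with_retry nvme-tcp
```

Capping the loop at 20 iterations keeps a genuinely stuck module from hanging the whole autotest run.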
00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.490 rmmod nvme_tcp 00:09:19.490 rmmod nvme_fabrics 00:09:19.490 rmmod nvme_keyring 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1914388 ']' 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1914388 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1914388 ']' 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1914388 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1914388 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1914388' 00:09:19.490 killing process with pid 1914388 00:09:19.490 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1914388 00:09:19.490 21:02:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1914388 00:09:19.751 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.751 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.751 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.752 21:02:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.664 00:09:21.664 real 0m13.123s 00:09:21.664 user 0m18.737s 00:09:21.664 sys 0m7.395s 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.664 ************************************ 
00:09:21.664 END TEST nvmf_bdev_io_wait 00:09:21.664 ************************************ 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.664 21:02:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.927 ************************************ 00:09:21.927 START TEST nvmf_queue_depth 00:09:21.927 ************************************ 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:21.927 * Looking for test storage... 00:09:21.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.927 --rc genhtml_branch_coverage=1 00:09:21.927 --rc genhtml_function_coverage=1 00:09:21.927 --rc genhtml_legend=1 00:09:21.927 --rc geninfo_all_blocks=1 00:09:21.927 --rc 
geninfo_unexecuted_blocks=1 00:09:21.927 00:09:21.927 ' 00:09:21.927 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.927 --rc genhtml_branch_coverage=1 00:09:21.927 --rc genhtml_function_coverage=1 00:09:21.927 --rc genhtml_legend=1 00:09:21.927 --rc geninfo_all_blocks=1 00:09:21.927 --rc geninfo_unexecuted_blocks=1 00:09:21.927 00:09:21.927 ' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.928 --rc genhtml_branch_coverage=1 00:09:21.928 --rc genhtml_function_coverage=1 00:09:21.928 --rc genhtml_legend=1 00:09:21.928 --rc geninfo_all_blocks=1 00:09:21.928 --rc geninfo_unexecuted_blocks=1 00:09:21.928 00:09:21.928 ' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.928 --rc genhtml_branch_coverage=1 00:09:21.928 --rc genhtml_function_coverage=1 00:09:21.928 --rc genhtml_legend=1 00:09:21.928 --rc geninfo_all_blocks=1 00:09:21.928 --rc geninfo_unexecuted_blocks=1 00:09:21.928 00:09:21.928 ' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.928 21:02:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.928 21:02:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.928 21:02:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.928 21:02:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.112 21:02:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:30.112 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.112 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:30.113 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:30.113 Found net devices under 0000:31:00.0: cvl_0_0 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:30.113 Found net devices under 0000:31:00.1: cvl_0_1 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.113 
21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:09:30.113 00:09:30.113 --- 10.0.0.2 ping statistics --- 00:09:30.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.113 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:09:30.113 00:09:30.113 --- 10.0.0.1 ping statistics --- 00:09:30.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.113 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1919795 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1919795 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1919795 ']' 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.113 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.374 [2024-12-05 21:02:31.564339] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:09:30.374 [2024-12-05 21:02:31.564396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.374 [2024-12-05 21:02:31.669039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.374 [2024-12-05 21:02:31.703506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.374 [2024-12-05 21:02:31.703542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:30.374 [2024-12-05 21:02:31.703550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.374 [2024-12-05 21:02:31.703556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.374 [2024-12-05 21:02:31.703562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.374 [2024-12-05 21:02:31.704142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.374 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.374 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:30.374 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.374 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.374 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.635 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.635 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.636 [2024-12-05 21:02:31.828936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.636 Malloc0 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.636 [2024-12-05 21:02:31.869467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.636 21:02:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1919840 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1919840 /var/tmp/bdevperf.sock 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1919840 ']' 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.636 21:02:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:30.636 [2024-12-05 21:02:31.925362] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:09:30.636 [2024-12-05 21:02:31.925425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919840 ] 00:09:30.636 [2024-12-05 21:02:32.007909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.636 [2024-12-05 21:02:32.049907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.577 NVMe0n1 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.577 21:02:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:31.578 Running I/O for 10 seconds... 
00:09:33.903 9136.00 IOPS, 35.69 MiB/s [2024-12-05T20:02:36.280Z] 9736.00 IOPS, 38.03 MiB/s [2024-12-05T20:02:37.221Z] 10471.33 IOPS, 40.90 MiB/s [2024-12-05T20:02:38.162Z] 10750.00 IOPS, 41.99 MiB/s [2024-12-05T20:02:39.103Z] 10975.00 IOPS, 42.87 MiB/s [2024-12-05T20:02:40.044Z] 11091.17 IOPS, 43.32 MiB/s [2024-12-05T20:02:41.426Z] 11120.57 IOPS, 43.44 MiB/s [2024-12-05T20:02:42.366Z] 11203.00 IOPS, 43.76 MiB/s [2024-12-05T20:02:43.308Z] 11264.00 IOPS, 44.00 MiB/s [2024-12-05T20:02:43.308Z] 11264.70 IOPS, 44.00 MiB/s 00:09:41.871 Latency(us) 00:09:41.871 [2024-12-05T20:02:43.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.871 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:41.871 Verification LBA range: start 0x0 length 0x4000 00:09:41.871 NVMe0n1 : 10.06 11298.19 44.13 0.00 0.00 90320.26 24576.00 66409.81 00:09:41.871 [2024-12-05T20:02:43.308Z] =================================================================================================================== 00:09:41.871 [2024-12-05T20:02:43.308Z] Total : 11298.19 44.13 0.00 0.00 90320.26 24576.00 66409.81 00:09:41.871 { 00:09:41.871 "results": [ 00:09:41.871 { 00:09:41.871 "job": "NVMe0n1", 00:09:41.871 "core_mask": "0x1", 00:09:41.871 "workload": "verify", 00:09:41.871 "status": "finished", 00:09:41.871 "verify_range": { 00:09:41.871 "start": 0, 00:09:41.871 "length": 16384 00:09:41.871 }, 00:09:41.871 "queue_depth": 1024, 00:09:41.871 "io_size": 4096, 00:09:41.871 "runtime": 10.060371, 00:09:41.871 "iops": 11298.191686966615, 00:09:41.871 "mibps": 44.13356127721334, 00:09:41.871 "io_failed": 0, 00:09:41.871 "io_timeout": 0, 00:09:41.871 "avg_latency_us": 90320.2584984985, 00:09:41.871 "min_latency_us": 24576.0, 00:09:41.871 "max_latency_us": 66409.81333333334 00:09:41.871 } 00:09:41.871 ], 00:09:41.871 "core_count": 1 00:09:41.871 } 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
1919840 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1919840 ']' 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1919840 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919840 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919840' 00:09:41.871 killing process with pid 1919840 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1919840 00:09:41.871 Received shutdown signal, test time was about 10.000000 seconds 00:09:41.871 00:09:41.871 Latency(us) 00:09:41.871 [2024-12-05T20:02:43.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.871 [2024-12-05T20:02:43.308Z] =================================================================================================================== 00:09:41.871 [2024-12-05T20:02:43.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1919840 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
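The bdevperf results block above is internally consistent: the reported `mibps` is just the reported `iops` times the 4096-byte IO size (the `-o 4096` argument to bdevperf). A quick arithmetic check, using the numbers copied from the JSON results (this is an after-the-fact sanity check, not part of the test run):

```python
# Values taken verbatim from the "results" JSON block in the log above.
iops = 11298.191686966615   # "iops" field
io_size = 4096              # "io_size" field (-o 4096 on the bdevperf command line)

# MiB/s = IOPS * bytes-per-IO / bytes-per-MiB
mibps = iops * io_size / (1024 * 1024)

# Agrees with the "mibps": 44.13356127721334 field in the results block.
print(mibps)
```

The same relation holds for the summary table line (`11298.19 IOPS, 44.13 MiB/s`), which is just the JSON values rounded to two decimals.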
00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.871 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.132 rmmod nvme_tcp 00:09:42.132 rmmod nvme_fabrics 00:09:42.132 rmmod nvme_keyring 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1919795 ']' 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1919795 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1919795 ']' 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1919795 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919795 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919795' 00:09:42.132 killing process with pid 1919795 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1919795 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1919795 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.132 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.133 21:02:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.676 21:02:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.676 00:09:44.676 real 0m22.513s 00:09:44.676 user 0m25.173s 00:09:44.676 sys 0m7.483s 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.676 ************************************ 00:09:44.676 END TEST nvmf_queue_depth 00:09:44.676 ************************************ 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.676 ************************************ 00:09:44.676 START TEST nvmf_target_multipath 00:09:44.676 ************************************ 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:44.676 * Looking for test storage... 
00:09:44.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:44.676 21:02:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
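The `cmp_versions` trace above (from `scripts/common.sh`) splits each version string on `.`, `-`, or `:` (the `IFS=.-:` step), then walks the components left to right, comparing numerically, with missing components treated as zero — which is how `lt 1.15 2` resolves to true. A minimal Python rendition of that logic (a hypothetical re-implementation for illustration, not the SPDK script itself):

```python
import re

def cmp_versions(v1, op, v2):
    """Component-wise version comparison mirroring the shell trace:
    split on '.', '-', or ':'; compare numerically; pad with zeros."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))   # unset shell array elements evaluate to 0
    b += [0] * (n - len(b))
    if a < b:                 # Python list comparison is lexicographic,
        return op in ("<", "<=")  # matching the component loop in the script
    if a > b:
        return op in (">", ">=")
    return op in ("<=", ">=", "==")
```

Here the trace's `lt 1.15 2` corresponds to `cmp_versions("1.15", "<", "2")`, which compares `[1, 15]` against `[2, 0]` and succeeds on the first component.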
00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.676 --rc genhtml_branch_coverage=1 00:09:44.676 --rc genhtml_function_coverage=1 00:09:44.676 --rc genhtml_legend=1 00:09:44.676 --rc geninfo_all_blocks=1 00:09:44.676 --rc geninfo_unexecuted_blocks=1 00:09:44.676 00:09:44.676 ' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.676 --rc genhtml_branch_coverage=1 00:09:44.676 --rc genhtml_function_coverage=1 00:09:44.676 --rc genhtml_legend=1 00:09:44.676 --rc geninfo_all_blocks=1 00:09:44.676 --rc geninfo_unexecuted_blocks=1 00:09:44.676 00:09:44.676 ' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.676 --rc genhtml_branch_coverage=1 00:09:44.676 --rc genhtml_function_coverage=1 00:09:44.676 --rc genhtml_legend=1 00:09:44.676 --rc geninfo_all_blocks=1 00:09:44.676 --rc geninfo_unexecuted_blocks=1 00:09:44.676 00:09:44.676 ' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.676 --rc genhtml_branch_coverage=1 00:09:44.676 --rc genhtml_function_coverage=1 00:09:44.676 --rc genhtml_legend=1 00:09:44.676 --rc geninfo_all_blocks=1 00:09:44.676 --rc geninfo_unexecuted_blocks=1 00:09:44.676 00:09:44.676 ' 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.676 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.677 21:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:52.957 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:52.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:52.957 Found net devices under 0000:31:00.0: cvl_0_0 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.957 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.958 21:02:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:52.958 Found net devices under 0000:31:00.1: cvl_0_1 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
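The device-discovery loop traced above expands a sysfs glob per PCI function, bails if nothing matched, then strips the path prefix to get bare interface names. A minimal unprivileged sketch of that pattern, using a throwaway directory in place of /sys/bus/pci/devices (the directory layout mimics sysfs but is an assumption for illustration only):

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs discovery loop from the trace above.
# A temp dir stands in for /sys/bus/pci/devices so this runs without
# root or real NICs; device names mirror the ones in the log.
set -euo pipefail
shopt -s nullglob   # a non-matching glob expands to nothing, not itself

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
    # Expand every net interface registered under this PCI function
    pci_net_devs=("$sysfs/$pci/net/"*)
    (( ${#pci_net_devs[@]} == 0 )) && continue
    # Keep only the interface names, dropping the sysfs path prefix
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The `nullglob` shopt is what lets the `(( ... == 0 ))` count check detect a PCI function with no bound netdev, instead of the glob surviving as a literal string.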
00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.958 21:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:09:52.958 00:09:52.958 --- 10.0.0.2 ping statistics --- 00:09:52.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.958 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:09:52.958 00:09:52.958 --- 10.0.0.1 ping statistics --- 00:09:52.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.958 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:52.958 only one NIC for nvmf test 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:52.958 21:02:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.958 rmmod nvme_tcp 00:09:52.958 rmmod nvme_fabrics 00:09:52.958 rmmod nvme_keyring 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.958 21:02:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.511 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.512 00:09:55.512 real 0m10.681s 00:09:55.512 user 0m2.358s 00:09:55.512 sys 0m6.235s 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.512 ************************************ 00:09:55.512 END TEST nvmf_target_multipath 00:09:55.512 ************************************ 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core 
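The firewall teardown in the trace (the `iptr` step) relies on every rule the test added having been tagged with an `SPDK_NVMF` comment by the `ipts` wrapper, so cleanup is just save-filter-restore. A hedged sketch of that tag-and-filter pattern, with a literal ruleset standing in for `iptables-save` output so it runs without root:

```shell
#!/usr/bin/env bash
# Sketch of the SPDK_NVMF tag-and-filter cleanup seen in the trace:
# rules are added with `-m comment --comment 'SPDK_NVMF:...'`, and
# teardown pipes iptables-save through grep -v before restoring.
# The here-string below is illustrative sample output, not captured
# from a live firewall.
set -euo pipefail

ruleset='*filter
-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 ..."
COMMIT'

# grep -v drops every tagged rule; in the real helper the survivors are
# piped straight into iptables-restore.
cleaned=$(grep -v SPDK_NVMF <<<"$ruleset")
echo "$cleaned"
```

Tagging each rule at insertion time means teardown never has to remember rule positions or counts; anything untagged (pre-existing firewall policy) survives the restore untouched.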
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.512 ************************************ 00:09:55.512 START TEST nvmf_zcopy 00:09:55.512 ************************************ 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:55.512 * Looking for test storage... 00:09:55.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.512 21:02:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.512 --rc genhtml_branch_coverage=1 00:09:55.512 --rc genhtml_function_coverage=1 00:09:55.512 --rc genhtml_legend=1 00:09:55.512 --rc geninfo_all_blocks=1 00:09:55.512 --rc geninfo_unexecuted_blocks=1 00:09:55.512 00:09:55.512 ' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.512 --rc genhtml_branch_coverage=1 00:09:55.512 --rc genhtml_function_coverage=1 00:09:55.512 --rc genhtml_legend=1 00:09:55.512 --rc geninfo_all_blocks=1 00:09:55.512 --rc geninfo_unexecuted_blocks=1 00:09:55.512 00:09:55.512 ' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.512 --rc genhtml_branch_coverage=1 00:09:55.512 --rc genhtml_function_coverage=1 00:09:55.512 --rc genhtml_legend=1 00:09:55.512 --rc geninfo_all_blocks=1 00:09:55.512 --rc geninfo_unexecuted_blocks=1 00:09:55.512 00:09:55.512 ' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.512 --rc genhtml_branch_coverage=1 00:09:55.512 --rc 
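The `lt 1.15 2` walk traced above splits both versions on `.` and `-` into arrays (`read -ra ver1`, `read -ra ver2`), then compares them field by numeric field, padding the shorter one with zeros. A reconstructed sketch of that comparison (the function body is inferred from the trace, not copied from scripts/common.sh):

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the lt/cmp_versions logic whose xtrace
# appears above: numeric, field-wise dotted-version comparison.
set -euo pipefail

lt() {  # lt A B -> exit 0 when version A < version B
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<<"$1"
    IFS=.- read -ra ver2 <<<"$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing trailing fields compare as 0 (so 2 == 2.0)
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions are equal
}

lt 1.15 2 && echo "1.15 < 2"
```

The per-field arithmetic comparison is the point: a lexical string compare would wrongly order `1.2.10` before `1.2.3`, which is exactly why the trace walks fields with `(( ver1[v] > ver2[v] ))` rather than `[[ < ]]`.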
genhtml_function_coverage=1 00:09:55.512 --rc genhtml_legend=1 00:09:55.512 --rc geninfo_all_blocks=1 00:09:55.512 --rc geninfo_unexecuted_blocks=1 00:09:55.512 00:09:55.512 ' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.512 21:02:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.512 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.513 21:02:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.513 21:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.657 21:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:03.657 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:03.657 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.657 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:03.658 Found net devices under 0000:31:00.0: cvl_0_0 00:10:03.658 21:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:03.658 Found net devices under 0000:31:00.1: cvl_0_1 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.658 21:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.658 21:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:10:03.658 00:10:03.658 --- 10.0.0.2 ping statistics --- 00:10:03.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.658 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:10:03.658 00:10:03.658 --- 10.0.0.1 ping statistics --- 00:10:03.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.658 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1931976 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1931976 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1931976 ']' 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.658 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.919 [2024-12-05 21:03:05.113069] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:03.919 [2024-12-05 21:03:05.113117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.919 [2024-12-05 21:03:05.218691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.919 [2024-12-05 21:03:05.253408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.919 [2024-12-05 21:03:05.253445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.919 [2024-12-05 21:03:05.253453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.919 [2024-12-05 21:03:05.253460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.919 [2024-12-05 21:03:05.253465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.919 [2024-12-05 21:03:05.254060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.491 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.491 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:04.491 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.491 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.491 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 [2024-12-05 21:03:05.949899] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 [2024-12-05 21:03:05.970205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 malloc0 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.752 21:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.752 { 00:10:04.752 "params": { 00:10:04.752 "name": "Nvme$subsystem", 00:10:04.752 "trtype": "$TEST_TRANSPORT", 00:10:04.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.752 "adrfam": "ipv4", 00:10:04.752 "trsvcid": "$NVMF_PORT", 00:10:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.752 "hdgst": ${hdgst:-false}, 00:10:04.752 "ddgst": ${ddgst:-false} 00:10:04.752 }, 00:10:04.752 "method": "bdev_nvme_attach_controller" 00:10:04.752 } 00:10:04.752 EOF 00:10:04.752 )") 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:04.752 21:03:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.752 "params": { 00:10:04.752 "name": "Nvme1", 00:10:04.752 "trtype": "tcp", 00:10:04.752 "traddr": "10.0.0.2", 00:10:04.752 "adrfam": "ipv4", 00:10:04.752 "trsvcid": "4420", 00:10:04.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.752 "hdgst": false, 00:10:04.752 "ddgst": false 00:10:04.752 }, 00:10:04.752 "method": "bdev_nvme_attach_controller" 00:10:04.752 }' 00:10:04.752 [2024-12-05 21:03:06.060882] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:04.752 [2024-12-05 21:03:06.060946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932018 ] 00:10:04.752 [2024-12-05 21:03:06.144045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.752 [2024-12-05 21:03:06.185819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.013 Running I/O for 10 seconds... 
00:10:06.922 6716.00 IOPS, 52.47 MiB/s [2024-12-05T20:03:09.742Z] 7118.00 IOPS, 55.61 MiB/s [2024-12-05T20:03:10.683Z] 8032.00 IOPS, 62.75 MiB/s [2024-12-05T20:03:11.624Z] 8473.00 IOPS, 66.20 MiB/s [2024-12-05T20:03:12.566Z] 8750.80 IOPS, 68.37 MiB/s [2024-12-05T20:03:13.510Z] 8934.17 IOPS, 69.80 MiB/s [2024-12-05T20:03:14.452Z] 9065.86 IOPS, 70.83 MiB/s [2024-12-05T20:03:15.394Z] 9164.12 IOPS, 71.59 MiB/s [2024-12-05T20:03:16.779Z] 9242.00 IOPS, 72.20 MiB/s [2024-12-05T20:03:16.779Z] 9302.90 IOPS, 72.68 MiB/s 00:10:15.342 Latency(us) 00:10:15.342 [2024-12-05T20:03:16.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.342 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:15.342 Verification LBA range: start 0x0 length 0x1000 00:10:15.342 Nvme1n1 : 10.01 9304.23 72.69 0.00 0.00 13705.75 1706.67 26760.53 00:10:15.342 [2024-12-05T20:03:16.779Z] =================================================================================================================== 00:10:15.342 [2024-12-05T20:03:16.779Z] Total : 9304.23 72.69 0.00 0.00 13705.75 1706.67 26760.53 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1934628 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.342 21:03:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.342 { 00:10:15.342 "params": { 00:10:15.342 "name": "Nvme$subsystem", 00:10:15.342 "trtype": "$TEST_TRANSPORT", 00:10:15.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.342 "adrfam": "ipv4", 00:10:15.342 "trsvcid": "$NVMF_PORT", 00:10:15.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.342 "hdgst": ${hdgst:-false}, 00:10:15.342 "ddgst": ${ddgst:-false} 00:10:15.342 }, 00:10:15.342 "method": "bdev_nvme_attach_controller" 00:10:15.342 } 00:10:15.342 EOF 00:10:15.342 )") 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:15.342 [2024-12-05 21:03:16.487724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.342 [2024-12-05 21:03:16.487755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:15.342 [2024-12-05 21:03:16.495706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.342 [2024-12-05 21:03:16.495715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.342 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:15.343 21:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.343 "params": { 00:10:15.343 "name": "Nvme1", 00:10:15.343 "trtype": "tcp", 00:10:15.343 "traddr": "10.0.0.2", 00:10:15.343 "adrfam": "ipv4", 00:10:15.343 "trsvcid": "4420", 00:10:15.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.343 "hdgst": false, 00:10:15.343 "ddgst": false 00:10:15.343 }, 00:10:15.343 "method": "bdev_nvme_attach_controller" 00:10:15.343 }' 00:10:15.343 [2024-12-05 21:03:16.503725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.503733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.511745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.511758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.519765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.519772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.528906] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:15.343 [2024-12-05 21:03:16.528953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1934628 ] 00:10:15.343 [2024-12-05 21:03:16.531793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.531801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.539813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.539821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.547834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.547841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.555853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.555921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.563878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.563886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.571898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.571906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.579915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.579922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:15.343 [2024-12-05 21:03:16.587935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.587942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.595955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.595962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.603976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.603983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.605083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.343 [2024-12-05 21:03:16.611998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.612007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.620019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.620028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.628040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.628049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.636060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.343 [2024-12-05 21:03:16.636069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.343 [2024-12-05 21:03:16.640552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.343 [2024-12-05 21:03:16.644080] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:15.343 [2024-12-05 21:03:16.644092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:15.605 Running I/O for 5 seconds...
00:10:16.651 19280.00 IOPS, 150.62 MiB/s [2024-12-05T20:03:18.088Z]
add namespace 00:10:17.434 [2024-12-05 21:03:18.680441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.680456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.692898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.692912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.705901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.705915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.718147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.718162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.731711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.731726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.744870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.744885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.758406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.758420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.770997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.771011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.783963] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.783978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.797353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.797368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.434 [2024-12-05 21:03:18.809917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.434 [2024-12-05 21:03:18.809932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.435 [2024-12-05 21:03:18.822755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.435 [2024-12-05 21:03:18.822770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.435 [2024-12-05 21:03:18.836227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.435 [2024-12-05 21:03:18.836241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.435 [2024-12-05 21:03:18.848789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.435 [2024-12-05 21:03:18.848804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.435 [2024-12-05 21:03:18.861193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.435 [2024-12-05 21:03:18.861208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.874324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.874339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.887068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.887082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.900493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.900508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.913491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.913506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.926681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.926696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.939705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.939720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.952768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.952783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.965866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.965880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.978754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.696 [2024-12-05 21:03:18.978769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.696 [2024-12-05 21:03:18.991479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 
[2024-12-05 21:03:18.991494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 19370.00 IOPS, 151.33 MiB/s [2024-12-05T20:03:19.134Z] [2024-12-05 21:03:19.004485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.004500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.017856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.017874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.030713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.030728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.043680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.043694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.056337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.056352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.069554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.069569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.082226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.082241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.094990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 
[2024-12-05 21:03:19.095005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.108169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.108184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.697 [2024-12-05 21:03:19.120941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.697 [2024-12-05 21:03:19.120956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.134016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.134031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.146517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.146532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.158975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.158989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.172209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.172224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.185664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.185679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.198271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.198286] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.211753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.211768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.225102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.225116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.238459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.238474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.251856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.251873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.265117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.265132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.278219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.278235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.291330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.291346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.304852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.304872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:17.958 [2024-12-05 21:03:19.317476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.317492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.330481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.330496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.343606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.343621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.356403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.356418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.369646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.369660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.958 [2024-12-05 21:03:19.382912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.958 [2024-12-05 21:03:19.382926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.395693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.395708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.409004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.409019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.421741] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.421760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.434451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.434466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.447783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.447798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.460197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.460212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.472750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.472766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.486510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.486525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.499707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.499723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.513259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.513274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.526347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.526362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.539843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.539858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.553322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.553337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.566213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.566229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.579636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.579652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.593247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.593262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.606325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.606341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.619866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.619881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.633346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 
[2024-12-05 21:03:19.633361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.219 [2024-12-05 21:03:19.646687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.219 [2024-12-05 21:03:19.646702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.659474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.659490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.672418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.672437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.684358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.684373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.696848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.696867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.710219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.710234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.723383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.723398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.736810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.736826] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.750107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.750122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.762794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.762809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.775933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.775948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.789156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.789172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.802583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.802599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.815762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.815777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.829166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.829181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.842580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.842595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.481 [2024-12-05 21:03:19.856270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.856285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.869257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.869272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.882418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.882433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.895854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.895874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.481 [2024-12-05 21:03:19.909277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.481 [2024-12-05 21:03:19.909292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:19.922584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:19.922604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:19.935601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:19.935617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:19.949111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:19.949126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:19.961782] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:19.961797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:19.974470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:19.974485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:19.988195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:19.988210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 19404.00 IOPS, 151.59 MiB/s [2024-12-05T20:03:20.179Z] [2024-12-05 21:03:20.001387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:20.001402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:20.015500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:20.015518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:20.028469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:20.028485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:20.041986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:20.042008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:20.055685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.742 [2024-12-05 21:03:20.055704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.742 [2024-12-05 21:03:20.063556] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:18.742 [2024-12-05 21:03:20.063572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats roughly every 13 ms from 21:03:20.063 through 21:03:22.000 (approximately 150 near-identical occurrences elided) ...]
00:10:19.792 19394.75 IOPS, 151.52 MiB/s [2024-12-05T20:03:21.229Z]
00:10:20.572 19399.80 IOPS, 151.56 MiB/s [2024-12-05T20:03:22.009Z]
00:10:20.572 Latency(us)
00:10:20.572 [2024-12-05T20:03:22.009Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:10:20.572 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:20.572 Nvme1n1 : 5.01                                   19402.02  151.58  0.00    0.00  6590.58  2771.63  18677.76
00:10:20.572 [2024-12-05T20:03:22.009Z] ===================================================================================================================
00:10:20.572 [2024-12-05T20:03:22.009Z] Total :          19402.02  151.58  0.00    0.00  6590.58  2771.63  18677.76
00:10:20.831 [2024-12-05 21:03:22.010388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.831 [2024-12-05 21:03:22.010402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats roughly every 12 ms through 21:03:22.118 (approximately 10 occurrences elided) ...]
00:10:20.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1934628) - No such process
00:10:20.831 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1934628
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.832 delay0
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.832 21:03:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:20.832 [2024-12-05 21:03:22.236557] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:28.959 Initializing NVMe Controllers 00:10:28.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:28.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:28.959 Initialization complete. Launching workers. 00:10:28.959 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 31124 00:10:28.959 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 31245, failed to submit 116 00:10:28.959 success 31152, unsuccessful 93, failed 0 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.959 rmmod nvme_tcp 00:10:28.959 rmmod nvme_fabrics 00:10:28.959 rmmod nvme_keyring 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.959 21:03:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1931976 ']' 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1931976 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1931976 ']' 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1931976 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1931976 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1931976' 00:10:28.959 killing process with pid 1931976 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1931976 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1931976 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
iptr 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.959 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.960 21:03:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.340 00:10:30.340 real 0m35.256s 00:10:30.340 user 0m45.897s 00:10:30.340 sys 0m12.208s 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.340 ************************************ 00:10:30.340 END TEST nvmf_zcopy 00:10:30.340 ************************************ 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.340 21:03:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.599 
************************************ 00:10:30.599 START TEST nvmf_nmic 00:10:30.599 ************************************ 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.600 * Looking for test storage... 00:10:30.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.600 21:03:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.600 --rc genhtml_branch_coverage=1 00:10:30.600 --rc genhtml_function_coverage=1 00:10:30.600 --rc genhtml_legend=1 00:10:30.600 --rc geninfo_all_blocks=1 00:10:30.600 --rc geninfo_unexecuted_blocks=1 00:10:30.600 00:10:30.600 ' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.600 --rc genhtml_branch_coverage=1 00:10:30.600 --rc genhtml_function_coverage=1 00:10:30.600 --rc genhtml_legend=1 00:10:30.600 --rc geninfo_all_blocks=1 00:10:30.600 --rc geninfo_unexecuted_blocks=1 00:10:30.600 00:10:30.600 ' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.600 --rc genhtml_branch_coverage=1 00:10:30.600 --rc genhtml_function_coverage=1 00:10:30.600 --rc genhtml_legend=1 00:10:30.600 --rc geninfo_all_blocks=1 00:10:30.600 --rc geninfo_unexecuted_blocks=1 00:10:30.600 00:10:30.600 ' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.600 --rc genhtml_branch_coverage=1 00:10:30.600 --rc genhtml_function_coverage=1 00:10:30.600 --rc genhtml_legend=1 00:10:30.600 --rc geninfo_all_blocks=1 00:10:30.600 --rc geninfo_unexecuted_blocks=1 00:10:30.600 00:10:30.600 ' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.600 
21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.600 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.861 
21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.861 21:03:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.003 21:03:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:39.003 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:39.003 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.003 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:39.004 Found net devices under 0000:31:00.0: cvl_0_0 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:39.004 Found net devices under 0000:31:00.1: cvl_0_1 00:10:39.004 
21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.004 21:03:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:39.004 00:10:39.004 --- 10.0.0.2 ping statistics --- 00:10:39.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.004 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
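The nvmf/common.sh steps above (lines @250-@291) build a loopback NVMe/TCP topology by moving one port of the NIC pair into a private network namespace, so the target and initiator can talk over real hardware on one host. Below is a minimal standalone sketch of the same sequence; the interface names (cvl_0_0/cvl_0_1), addresses, and namespace name are copied from the log, and `RUN=echo` makes it a dry run that only prints each command, since actually executing them needs root and the physical ports:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP loopback seen above.
# Names/addresses are taken from the log; RUN=echo prints commands
# instead of executing them (a real run needs root + the cvl_* ports).
TARGET_IF=cvl_0_0          # moved into the private namespace (SPDK target side)
INITIATOR_IF=cvl_0_1       # stays in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk
RUN=echo                   # set RUN= (empty) to execute for real

$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                      # initiator -> target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The two pings at the end mirror the connectivity check the harness performs before it will start `nvmf_tgt` inside the namespace.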
00:10:39.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:10:39.004 00:10:39.004 --- 10.0.0.1 ping statistics --- 00:10:39.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.004 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1941994 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1941994 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1941994 ']' 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.004 21:03:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.004 [2024-12-05 21:03:40.400650] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:10:39.004 [2024-12-05 21:03:40.400717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.266 [2024-12-05 21:03:40.495568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.266 [2024-12-05 21:03:40.540115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.266 [2024-12-05 21:03:40.540152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:39.266 [2024-12-05 21:03:40.540161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.266 [2024-12-05 21:03:40.540170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.266 [2024-12-05 21:03:40.540176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.266 [2024-12-05 21:03:40.541900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.266 [2024-12-05 21:03:40.542095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.266 [2024-12-05 21:03:40.542226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.266 [2024-12-05 21:03:40.542226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.836 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.836 [2024-12-05 21:03:41.263386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.102 
21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.102 Malloc0 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.102 [2024-12-05 21:03:41.335330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.102 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:40.102 test case1: single bdev can't be used in multiple subsystems 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.103 [2024-12-05 21:03:41.371249] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:40.103 [2024-12-05 
21:03:41.371268] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:10:40.103 [2024-12-05 21:03:41.371276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:40.103 request:
00:10:40.103 {
00:10:40.103 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:40.103 "namespace": {
00:10:40.103 "bdev_name": "Malloc0",
00:10:40.103 "no_auto_visible": false,
00:10:40.103 "hide_metadata": false
00:10:40.103 },
00:10:40.103 "method": "nvmf_subsystem_add_ns",
00:10:40.103 "req_id": 1
00:10:40.103 }
00:10:40.103 Got JSON-RPC error response
00:10:40.103 response:
00:10:40.103 {
00:10:40.103 "code": -32602,
00:10:40.103 "message": "Invalid parameters"
00:10:40.103 }
00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:10:40.103  Adding namespace failed - expected result.
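The `nmic_status` handling visible in the xtrace above (target/nmic.sh@28-@36) is the standard shell pattern for a negative test: the second `nvmf_subsystem_add_ns` is *expected* to fail because Malloc0 is already claimed by cnode1, so its exit status is captured with `||` instead of letting an errexit shell abort. A standalone sketch of that pattern follows; the `./scripts/rpc.py` path is illustrative (SPDK-repo-relative), not guaranteed to exist wherever this runs:

```shell
#!/usr/bin/env bash
# Negative-test pattern from target/nmic.sh: adding a bdev that another
# subsystem already claims must fail. The rpc.py path is illustrative;
# the || captures the expected failure instead of aborting the script.
nmic_status=0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    2>/dev/null || nmic_status=1

if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure expected."
    exit 1
fi
echo " Adding namespace failed - expected result."
```

The test only passes when the RPC returns non-zero, which is exactly the "Invalid parameters" JSON-RPC error logged above.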
00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:40.103 test case2: host connect to nvmf target in multiple paths 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.103 [2024-12-05 21:03:41.383409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.103 21:03:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.016 21:03:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:43.400 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.400 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:43.400 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.400 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:43.400 21:03:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
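Test case2 above connects the same subsystem NQN through two listeners (ports 4420 and 4421); the Linux NVMe host stack then sees two controllers but one namespace, i.e. two paths to the same device. A dry-run sketch of those nvme-cli calls, with the host NQN/ID values copied from the log and `RUN=echo` standing in for a real root + nvme-tcp-module environment:

```shell
#!/usr/bin/env bash
# Dry-run of the two-path connect from test case2; values copied from the
# log above. RUN=echo prints the nvme-cli commands instead of running them.
RUN=echo
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
SUBNQN=nqn.2016-06.io.spdk:cnode1

for port in 4420 4421; do   # one connect per listener -> two paths, one namespace
    $RUN nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "$SUBNQN" -a 10.0.0.2 -s "$port"
done
$RUN nvme disconnect -n "$SUBNQN"   # teardown drops both controllers at once
```

This matches the later teardown in the log, where a single `nvme disconnect -n nqn.2016-06.io.spdk:cnode1` reports "disconnected 2 controller(s)".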
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:10:45.336 21:03:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:45.336 [global]
00:10:45.336 thread=1
00:10:45.336 invalidate=1
00:10:45.336 rw=write
00:10:45.336 time_based=1
00:10:45.337 runtime=1
00:10:45.337 ioengine=libaio
00:10:45.337 direct=1
00:10:45.337 bs=4096
00:10:45.337 iodepth=1
00:10:45.337 norandommap=0
00:10:45.337 numjobs=1
00:10:45.337
00:10:45.337 verify_dump=1
00:10:45.337 verify_backlog=512
00:10:45.337 verify_state_save=0
00:10:45.337 do_verify=1
00:10:45.337 verify=crc32c-intel
00:10:45.337 [job0]
00:10:45.337 filename=/dev/nvme0n1
00:10:45.602 Could not set queue depth (nvme0n1)
00:10:45.602 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:45.602 fio-3.35
00:10:45.602 Starting 1 thread
00:10:46.984
00:10:46.984 job0: (groupid=0, jobs=1): err= 0: pid=1943396: Thu Dec 5 21:03:48 2024
00:10:46.984 read: IOPS=18, BW=73.0KiB/s (74.8kB/s)(76.0KiB/1041msec)
00:10:46.984 slat (nsec): min=25485, max=26254, avg=25754.89, stdev=232.84
00:10:46.984 clat (usec): min=40912, max=41310, avg=40980.25, stdev=86.64
00:10:46.984 lat (usec): min=40938, max=41336, avg=41006.00, stdev=86.72
00:10:46.984 clat percentiles (usec):
00:10:46.984 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:10:46.984 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:10:46.984 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:10:46.984 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:10:46.984 | 99.99th=[41157]
00:10:46.984 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets
00:10:46.984 slat (nsec): min=9569, max=54488, avg=29815.30, stdev=8709.13
00:10:46.984 clat (usec): min=201, max=815, avg=473.64, stdev=114.24
00:10:46.984 lat (usec): min=232, max=849, avg=503.45, stdev=115.04
00:10:46.984 clat percentiles (usec):
00:10:46.984 | 1.00th=[ 223], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 359],
00:10:46.984 | 30.00th=[ 408], 40.00th=[ 433], 50.00th=[ 494], 60.00th=[ 515],
00:10:46.984 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644],
00:10:46.984 | 99.00th=[ 701], 99.50th=[ 734], 99.90th=[ 816], 99.95th=[ 816],
00:10:46.984 | 99.99th=[ 816]
00:10:46.984 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:10:46.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:10:46.984 lat (usec) : 250=1.69%, 500=48.96%, 750=45.57%, 1000=0.19%
00:10:46.984 lat (msec) : 50=3.58%
00:10:46.984 cpu : usr=0.87%, sys=1.25%, ctx=531, majf=0, minf=1
00:10:46.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:46.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:46.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:46.984 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:46.984 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:46.984
00:10:46.984 Run status group 0 (all jobs):
00:10:46.984 READ: bw=73.0KiB/s (74.8kB/s), 73.0KiB/s-73.0KiB/s (74.8kB/s-74.8kB/s), io=76.0KiB (77.8kB), run=1041-1041msec
00:10:46.984 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec
00:10:46.984
00:10:46.984 Disk stats (read/write):
00:10:46.984 nvme0n1: ios=65/512, merge=0/0, ticks=643/221, in_queue=864, util=92.79%
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:46.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic --
nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.984 rmmod nvme_tcp 00:10:46.984 rmmod nvme_fabrics 00:10:46.984 rmmod nvme_keyring 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1941994 ']' 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1941994 00:10:46.984 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1941994 ']' 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1941994 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1941994 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1941994' 00:10:46.985 killing process with pid 1941994 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1941994 00:10:46.985 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1941994 00:10:47.244 21:03:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.244 21:03:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.154 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:49.154 00:10:49.154 real 0m18.757s 00:10:49.154 user 0m50.242s 00:10:49.154 sys 0m7.098s 00:10:49.154 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.154 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:49.154 ************************************ 00:10:49.154 END TEST nvmf_nmic 00:10:49.154 ************************************ 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.465 ************************************ 00:10:49.465 START TEST nvmf_fio_target 00:10:49.465 ************************************ 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.465 * Looking for test storage... 00:10:49.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:49.465 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:49.466 21:03:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:49.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.466 --rc genhtml_branch_coverage=1 00:10:49.466 --rc genhtml_function_coverage=1 00:10:49.466 --rc genhtml_legend=1 00:10:49.466 --rc geninfo_all_blocks=1 00:10:49.466 --rc geninfo_unexecuted_blocks=1 00:10:49.466 00:10:49.466 ' 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:49.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.466 --rc genhtml_branch_coverage=1 00:10:49.466 --rc genhtml_function_coverage=1 00:10:49.466 --rc genhtml_legend=1 00:10:49.466 --rc geninfo_all_blocks=1 00:10:49.466 --rc geninfo_unexecuted_blocks=1 00:10:49.466 00:10:49.466 ' 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:49.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.466 --rc genhtml_branch_coverage=1 00:10:49.466 --rc genhtml_function_coverage=1 00:10:49.466 --rc genhtml_legend=1 00:10:49.466 --rc geninfo_all_blocks=1 00:10:49.466 --rc geninfo_unexecuted_blocks=1 00:10:49.466 00:10:49.466 ' 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:49.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.466 --rc genhtml_branch_coverage=1 00:10:49.466 --rc genhtml_function_coverage=1 00:10:49.466 --rc genhtml_legend=1 00:10:49.466 --rc geninfo_all_blocks=1 00:10:49.466 --rc geninfo_unexecuted_blocks=1 00:10:49.466 00:10:49.466 ' 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.466 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.830 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.831 21:03:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.972 21:03:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:57.972 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:57.972 21:03:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:57.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:57.972 Found net devices under 0000:31:00.0: cvl_0_0 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.972 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:57.972 Found net devices under 0000:31:00.1: cvl_0_1 
00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:57.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:10:57.973 00:10:57.973 --- 10.0.0.2 ping statistics --- 00:10:57.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.973 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:10:57.973 00:10:57.973 --- 10.0.0.1 ping statistics --- 00:10:57.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.973 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.973 21:03:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1948445 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1948445 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1948445 ']' 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.973 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 [2024-12-05 21:03:59.106256] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:10:57.973 [2024-12-05 21:03:59.106319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.973 [2024-12-05 21:03:59.200102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.973 [2024-12-05 21:03:59.241805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.973 [2024-12-05 21:03:59.241843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.973 [2024-12-05 21:03:59.241851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.973 [2024-12-05 21:03:59.241858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.973 [2024-12-05 21:03:59.241871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:57.973 [2024-12-05 21:03:59.243509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.973 [2024-12-05 21:03:59.243629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.973 [2024-12-05 21:03:59.243790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.973 [2024-12-05 21:03:59.243790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.545 21:03:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.806 [2024-12-05 21:04:00.116310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.806 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.066 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:59.066 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.327 21:04:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:59.327 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.327 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:59.327 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.589 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:59.589 21:04:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:59.850 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.110 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:00.110 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.110 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:00.110 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.371 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:00.371 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:00.633 21:04:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.892 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.893 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.893 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.893 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:01.152 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.412 [2024-12-05 21:04:02.596262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.412 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.412 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:01.672 21:04:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:03.585 21:04:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:03.585 21:04:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.585 21:04:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.585 21:04:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:03.585 21:04:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:03.585 21:04:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:05.504 21:04:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:05.504 [global] 00:11:05.504 thread=1 00:11:05.504 invalidate=1 00:11:05.504 rw=write 00:11:05.504 time_based=1 00:11:05.504 runtime=1 00:11:05.504 ioengine=libaio 00:11:05.504 direct=1 00:11:05.504 bs=4096 00:11:05.504 iodepth=1 00:11:05.504 norandommap=0 00:11:05.504 numjobs=1 00:11:05.504 00:11:05.504 
verify_dump=1 00:11:05.504 verify_backlog=512 00:11:05.504 verify_state_save=0 00:11:05.504 do_verify=1 00:11:05.504 verify=crc32c-intel 00:11:05.504 [job0] 00:11:05.504 filename=/dev/nvme0n1 00:11:05.504 [job1] 00:11:05.504 filename=/dev/nvme0n2 00:11:05.504 [job2] 00:11:05.504 filename=/dev/nvme0n3 00:11:05.504 [job3] 00:11:05.504 filename=/dev/nvme0n4 00:11:05.504 Could not set queue depth (nvme0n1) 00:11:05.504 Could not set queue depth (nvme0n2) 00:11:05.504 Could not set queue depth (nvme0n3) 00:11:05.504 Could not set queue depth (nvme0n4) 00:11:05.765 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.765 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.765 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.765 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.765 fio-3.35 00:11:05.765 Starting 4 threads 00:11:07.174 00:11:07.174 job0: (groupid=0, jobs=1): err= 0: pid=1950340: Thu Dec 5 21:04:08 2024 00:11:07.174 read: IOPS=16, BW=65.6KiB/s (67.2kB/s)(68.0KiB/1036msec) 00:11:07.174 slat (nsec): min=14894, max=26472, avg=24907.53, stdev=3748.41 00:11:07.174 clat (usec): min=40972, max=42032, avg=41905.10, stdev=246.73 00:11:07.174 lat (usec): min=40987, max=42058, avg=41930.01, stdev=249.19 00:11:07.174 clat percentiles (usec): 00:11:07.174 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:11:07.174 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:07.174 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:07.174 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.174 | 99.99th=[42206] 00:11:07.174 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:07.174 slat (usec): min=5, max=177, 
avg=19.61, stdev=11.53 00:11:07.174 clat (usec): min=258, max=851, avg=605.79, stdev=104.12 00:11:07.174 lat (usec): min=266, max=874, avg=625.40, stdev=107.56 00:11:07.174 clat percentiles (usec): 00:11:07.174 | 1.00th=[ 371], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 506], 00:11:07.174 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:11:07.175 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 758], 00:11:07.175 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 848], 99.95th=[ 848], 00:11:07.175 | 99.99th=[ 848] 00:11:07.175 bw ( KiB/s): min= 4096, max= 4096, per=46.82%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.175 lat (usec) : 500=17.77%, 750=73.53%, 1000=5.48% 00:11:07.175 lat (msec) : 50=3.21% 00:11:07.175 cpu : usr=0.68%, sys=0.68%, ctx=533, majf=0, minf=1 00:11:07.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.175 job1: (groupid=0, jobs=1): err= 0: pid=1950341: Thu Dec 5 21:04:08 2024 00:11:07.175 read: IOPS=141, BW=567KiB/s (581kB/s)(568KiB/1001msec) 00:11:07.175 slat (nsec): min=8631, max=41034, avg=27648.60, stdev=2399.18 00:11:07.175 clat (usec): min=487, max=42053, avg=4681.51, stdev=11832.82 00:11:07.175 lat (usec): min=514, max=42080, avg=4709.16, stdev=11832.53 00:11:07.175 clat percentiles (usec): 00:11:07.175 | 1.00th=[ 498], 5.00th=[ 676], 10.00th=[ 725], 20.00th=[ 775], 00:11:07.175 | 30.00th=[ 889], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1045], 00:11:07.175 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[41681], 00:11:07.175 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:11:07.175 | 99.99th=[42206] 00:11:07.175 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:07.175 slat (nsec): min=9558, max=64020, avg=29952.66, stdev=11042.44 00:11:07.175 clat (usec): min=264, max=886, avg=608.82, stdev=113.76 00:11:07.175 lat (usec): min=277, max=922, avg=638.77, stdev=119.64 00:11:07.175 clat percentiles (usec): 00:11:07.175 | 1.00th=[ 347], 5.00th=[ 429], 10.00th=[ 461], 20.00th=[ 498], 00:11:07.175 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:11:07.175 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:11:07.175 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 889], 99.95th=[ 889], 00:11:07.175 | 99.99th=[ 889] 00:11:07.175 bw ( KiB/s): min= 4096, max= 4096, per=46.82%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.175 lat (usec) : 500=16.36%, 750=57.34%, 1000=13.61% 00:11:07.175 lat (msec) : 2=10.70%, 50=1.99% 00:11:07.175 cpu : usr=1.50%, sys=2.20%, ctx=655, majf=0, minf=1 00:11:07.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 issued rwts: total=142,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.175 job2: (groupid=0, jobs=1): err= 0: pid=1950342: Thu Dec 5 21:04:08 2024 00:11:07.175 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:07.175 slat (nsec): min=27603, max=65076, avg=28834.75, stdev=3290.66 00:11:07.175 clat (usec): min=770, max=1215, avg=970.10, stdev=63.08 00:11:07.175 lat (usec): min=798, max=1264, avg=998.93, stdev=62.90 00:11:07.175 clat percentiles (usec): 00:11:07.175 | 1.00th=[ 791], 5.00th=[ 857], 10.00th=[ 881], 20.00th=[ 938], 00:11:07.175 | 30.00th=[ 947], 
40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:11:07.175 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:11:07.175 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1221], 99.95th=[ 1221], 00:11:07.175 | 99.99th=[ 1221] 00:11:07.175 write: IOPS=729, BW=2917KiB/s (2987kB/s)(2920KiB/1001msec); 0 zone resets 00:11:07.175 slat (nsec): min=9589, max=58197, avg=32684.07, stdev=10216.27 00:11:07.175 clat (usec): min=186, max=971, avg=623.01, stdev=116.99 00:11:07.175 lat (usec): min=222, max=1022, avg=655.69, stdev=121.23 00:11:07.175 clat percentiles (usec): 00:11:07.175 | 1.00th=[ 351], 5.00th=[ 412], 10.00th=[ 474], 20.00th=[ 529], 00:11:07.175 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:11:07.175 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 799], 00:11:07.175 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 971], 00:11:07.175 | 99.99th=[ 971] 00:11:07.175 bw ( KiB/s): min= 4096, max= 4096, per=46.82%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.175 lat (usec) : 250=0.24%, 500=8.53%, 750=43.16%, 1000=36.88% 00:11:07.175 lat (msec) : 2=11.19% 00:11:07.175 cpu : usr=3.00%, sys=4.70%, ctx=1243, majf=0, minf=1 00:11:07.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 issued rwts: total=512,730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.175 job3: (groupid=0, jobs=1): err= 0: pid=1950343: Thu Dec 5 21:04:08 2024 00:11:07.175 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:11:07.175 slat (nsec): min=27043, max=28168, avg=27320.41, stdev=278.02 00:11:07.175 clat (usec): min=1117, max=42097, avg=39494.27, stdev=9892.49 00:11:07.175 lat 
(usec): min=1145, max=42125, avg=39521.59, stdev=9892.50 00:11:07.175 clat percentiles (usec): 00:11:07.175 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:11:07.175 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:07.175 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:07.175 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.175 | 99.99th=[42206] 00:11:07.175 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:07.175 slat (nsec): min=3210, max=56675, avg=29910.12, stdev=10481.43 00:11:07.175 clat (usec): min=253, max=915, avg=613.06, stdev=106.55 00:11:07.175 lat (usec): min=263, max=950, avg=642.97, stdev=111.68 00:11:07.175 clat percentiles (usec): 00:11:07.175 | 1.00th=[ 351], 5.00th=[ 412], 10.00th=[ 469], 20.00th=[ 537], 00:11:07.175 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 644], 00:11:07.175 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:11:07.175 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 914], 99.95th=[ 914], 00:11:07.175 | 99.99th=[ 914] 00:11:07.175 bw ( KiB/s): min= 4096, max= 4096, per=46.82%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.175 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.175 lat (usec) : 500=14.93%, 750=73.16%, 1000=8.70% 00:11:07.175 lat (msec) : 2=0.19%, 50=3.02% 00:11:07.175 cpu : usr=0.70%, sys=2.29%, ctx=530, majf=0, minf=2 00:11:07.175 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.175 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.175 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.175 00:11:07.175 Run status group 0 (all jobs): 00:11:07.175 READ: bw=2656KiB/s (2720kB/s), 
65.6KiB/s-2046KiB/s (67.2kB/s-2095kB/s), io=2752KiB (2818kB), run=1001-1036msec 00:11:07.175 WRITE: bw=8749KiB/s (8959kB/s), 1977KiB/s-2917KiB/s (2024kB/s-2987kB/s), io=9064KiB (9282kB), run=1001-1036msec 00:11:07.175 00:11:07.175 Disk stats (read/write): 00:11:07.175 nvme0n1: ios=58/512, merge=0/0, ticks=556/302, in_queue=858, util=86.87% 00:11:07.175 nvme0n2: ios=61/512, merge=0/0, ticks=942/256, in_queue=1198, util=87.95% 00:11:07.175 nvme0n3: ios=538/512, merge=0/0, ticks=1064/259, in_queue=1323, util=91.96% 00:11:07.175 nvme0n4: ios=69/512, merge=0/0, ticks=555/249, in_queue=804, util=97.21% 00:11:07.175 21:04:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:07.175 [global] 00:11:07.175 thread=1 00:11:07.175 invalidate=1 00:11:07.175 rw=randwrite 00:11:07.175 time_based=1 00:11:07.175 runtime=1 00:11:07.175 ioengine=libaio 00:11:07.175 direct=1 00:11:07.175 bs=4096 00:11:07.175 iodepth=1 00:11:07.175 norandommap=0 00:11:07.175 numjobs=1 00:11:07.175 00:11:07.175 verify_dump=1 00:11:07.175 verify_backlog=512 00:11:07.175 verify_state_save=0 00:11:07.175 do_verify=1 00:11:07.175 verify=crc32c-intel 00:11:07.175 [job0] 00:11:07.175 filename=/dev/nvme0n1 00:11:07.175 [job1] 00:11:07.175 filename=/dev/nvme0n2 00:11:07.175 [job2] 00:11:07.175 filename=/dev/nvme0n3 00:11:07.175 [job3] 00:11:07.175 filename=/dev/nvme0n4 00:11:07.175 Could not set queue depth (nvme0n1) 00:11:07.175 Could not set queue depth (nvme0n2) 00:11:07.175 Could not set queue depth (nvme0n3) 00:11:07.175 Could not set queue depth (nvme0n4) 00:11:07.438 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.439 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.439 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.439 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.439 fio-3.35 00:11:07.439 Starting 4 threads 00:11:08.822 00:11:08.822 job0: (groupid=0, jobs=1): err= 0: pid=1950864: Thu Dec 5 21:04:09 2024 00:11:08.822 read: IOPS=496, BW=1986KiB/s (2034kB/s)(2052KiB/1033msec) 00:11:08.822 slat (nsec): min=6971, max=58946, avg=23757.65, stdev=7977.06 00:11:08.822 clat (usec): min=466, max=41133, avg=878.06, stdev=1782.50 00:11:08.822 lat (usec): min=493, max=41160, avg=901.82, stdev=1782.68 00:11:08.822 clat percentiles (usec): 00:11:08.822 | 1.00th=[ 586], 5.00th=[ 668], 10.00th=[ 701], 20.00th=[ 734], 00:11:08.822 | 30.00th=[ 775], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 824], 00:11:08.822 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 914], 00:11:08.822 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[41157], 99.95th=[41157], 00:11:08.822 | 99.99th=[41157] 00:11:08.822 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:11:08.822 slat (nsec): min=9798, max=92748, avg=28716.16, stdev=10798.47 00:11:08.822 clat (usec): min=163, max=846, avg=517.38, stdev=134.06 00:11:08.822 lat (usec): min=173, max=897, avg=546.10, stdev=138.54 00:11:08.822 clat percentiles (usec): 00:11:08.822 | 1.00th=[ 260], 5.00th=[ 310], 10.00th=[ 347], 20.00th=[ 396], 00:11:08.822 | 30.00th=[ 441], 40.00th=[ 469], 50.00th=[ 498], 60.00th=[ 545], 00:11:08.822 | 70.00th=[ 594], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 742], 00:11:08.822 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 840], 99.95th=[ 848], 00:11:08.822 | 99.99th=[ 848] 00:11:08.822 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=2 00:11:08.822 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:08.822 lat (usec) : 250=0.39%, 500=33.70%, 750=38.45%, 1000=27.39% 00:11:08.822 lat (msec) : 50=0.07% 00:11:08.822 cpu : usr=2.13%, sys=4.07%, 
ctx=1539, majf=0, minf=1 00:11:08.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.822 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.822 job1: (groupid=0, jobs=1): err= 0: pid=1950865: Thu Dec 5 21:04:09 2024 00:11:08.822 read: IOPS=779, BW=3117KiB/s (3192kB/s)(3120KiB/1001msec) 00:11:08.822 slat (nsec): min=6694, max=59025, avg=22959.88, stdev=6953.53 00:11:08.822 clat (usec): min=244, max=958, avg=676.07, stdev=149.51 00:11:08.822 lat (usec): min=250, max=983, avg=699.03, stdev=151.43 00:11:08.822 clat percentiles (usec): 00:11:08.822 | 1.00th=[ 297], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 553], 00:11:08.822 | 30.00th=[ 594], 40.00th=[ 652], 50.00th=[ 693], 60.00th=[ 734], 00:11:08.822 | 70.00th=[ 783], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:11:08.822 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:11:08.822 | 99.99th=[ 963] 00:11:08.822 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:08.822 slat (nsec): min=9131, max=66159, avg=28144.56, stdev=8322.95 00:11:08.822 clat (usec): min=97, max=849, avg=402.61, stdev=145.57 00:11:08.822 lat (usec): min=107, max=866, avg=430.75, stdev=149.28 00:11:08.823 clat percentiles (usec): 00:11:08.823 | 1.00th=[ 119], 5.00th=[ 135], 10.00th=[ 202], 20.00th=[ 269], 00:11:08.823 | 30.00th=[ 330], 40.00th=[ 367], 50.00th=[ 408], 60.00th=[ 453], 00:11:08.823 | 70.00th=[ 486], 80.00th=[ 529], 90.00th=[ 594], 95.00th=[ 627], 00:11:08.823 | 99.00th=[ 709], 99.50th=[ 742], 99.90th=[ 799], 99.95th=[ 848], 00:11:08.823 | 99.99th=[ 848] 00:11:08.823 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.823 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:11:08.823 lat (usec) : 100=0.06%, 250=8.92%, 500=38.91%, 750=36.36%, 1000=15.74% 00:11:08.823 cpu : usr=3.00%, sys=4.50%, ctx=1804, majf=0, minf=2 00:11:08.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.823 issued rwts: total=780,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.823 job2: (groupid=0, jobs=1): err= 0: pid=1950866: Thu Dec 5 21:04:09 2024 00:11:08.823 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:11:08.823 slat (nsec): min=26505, max=27369, avg=27025.41, stdev=281.69 00:11:08.823 clat (usec): min=40974, max=41994, avg=41520.58, stdev=412.75 00:11:08.823 lat (usec): min=41001, max=42021, avg=41547.61, stdev=412.69 00:11:08.823 clat percentiles (usec): 00:11:08.823 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:08.823 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:11:08.823 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:08.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.823 | 99.99th=[42206] 00:11:08.823 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:11:08.823 slat (nsec): min=9826, max=53226, avg=29412.96, stdev=10155.76 00:11:08.823 clat (usec): min=216, max=1173, avg=555.46, stdev=145.92 00:11:08.823 lat (usec): min=252, max=1207, avg=584.87, stdev=149.67 00:11:08.823 clat percentiles (usec): 00:11:08.823 | 1.00th=[ 265], 5.00th=[ 318], 10.00th=[ 379], 20.00th=[ 445], 00:11:08.823 | 30.00th=[ 482], 40.00th=[ 510], 50.00th=[ 537], 60.00th=[ 586], 00:11:08.823 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[ 758], 95.00th=[ 816], 00:11:08.823 | 99.00th=[ 914], 99.50th=[ 996], 
99.90th=[ 1172], 99.95th=[ 1172], 00:11:08.823 | 99.99th=[ 1172] 00:11:08.823 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.823 lat (usec) : 250=0.19%, 500=35.73%, 750=51.04%, 1000=9.64% 00:11:08.823 lat (msec) : 2=0.19%, 50=3.21% 00:11:08.823 cpu : usr=1.19%, sys=0.99%, ctx=531, majf=0, minf=1 00:11:08.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.823 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.823 job3: (groupid=0, jobs=1): err= 0: pid=1950868: Thu Dec 5 21:04:09 2024 00:11:08.823 read: IOPS=16, BW=65.3KiB/s (66.9kB/s)(68.0KiB/1041msec) 00:11:08.823 slat (nsec): min=10651, max=29372, avg=26369.35, stdev=4091.26 00:11:08.823 clat (usec): min=40939, max=42136, avg=41677.86, stdev=454.67 00:11:08.823 lat (usec): min=40967, max=42163, avg=41704.23, stdev=455.96 00:11:08.823 clat percentiles (usec): 00:11:08.823 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:08.823 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:08.823 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:08.823 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.823 | 99.99th=[42206] 00:11:08.823 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:11:08.823 slat (nsec): min=9177, max=63529, avg=30208.62, stdev=9541.36 00:11:08.823 clat (usec): min=236, max=1003, avg=610.70, stdev=120.92 00:11:08.823 lat (usec): min=246, max=1036, avg=640.91, stdev=124.95 00:11:08.823 clat percentiles (usec): 00:11:08.823 | 1.00th=[ 318], 5.00th=[ 383], 10.00th=[ 
461], 20.00th=[ 506], 00:11:08.823 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:11:08.823 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 783], 00:11:08.823 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1004], 00:11:08.823 | 99.99th=[ 1004] 00:11:08.823 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.823 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.823 lat (usec) : 250=0.38%, 500=17.58%, 750=69.00%, 1000=9.64% 00:11:08.823 lat (msec) : 2=0.19%, 50=3.21% 00:11:08.823 cpu : usr=1.44%, sys=1.54%, ctx=529, majf=0, minf=1 00:11:08.823 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.823 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.823 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.823 00:11:08.823 Run status group 0 (all jobs): 00:11:08.823 READ: bw=5099KiB/s (5221kB/s), 65.3KiB/s-3117KiB/s (66.9kB/s-3192kB/s), io=5308KiB (5435kB), run=1001-1041msec 00:11:08.823 WRITE: bw=11.5MiB/s (12.1MB/s), 1967KiB/s-4092KiB/s (2015kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1041msec 00:11:08.823 00:11:08.823 Disk stats (read/write): 00:11:08.823 nvme0n1: ios=548/779, merge=0/0, ticks=1361/371, in_queue=1732, util=99.40% 00:11:08.823 nvme0n2: ios=605/1024, merge=0/0, ticks=421/391, in_queue=812, util=89.30% 00:11:08.823 nvme0n3: ios=60/512, merge=0/0, ticks=765/268, in_queue=1033, util=96.20% 00:11:08.823 nvme0n4: ios=69/512, merge=0/0, ticks=598/244, in_queue=842, util=95.83% 00:11:08.823 21:04:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:08.823 [global] 00:11:08.823 thread=1 
00:11:08.823 invalidate=1 00:11:08.823 rw=write 00:11:08.823 time_based=1 00:11:08.823 runtime=1 00:11:08.823 ioengine=libaio 00:11:08.823 direct=1 00:11:08.823 bs=4096 00:11:08.823 iodepth=128 00:11:08.823 norandommap=0 00:11:08.823 numjobs=1 00:11:08.823 00:11:08.823 verify_dump=1 00:11:08.823 verify_backlog=512 00:11:08.823 verify_state_save=0 00:11:08.823 do_verify=1 00:11:08.823 verify=crc32c-intel 00:11:08.823 [job0] 00:11:08.823 filename=/dev/nvme0n1 00:11:08.823 [job1] 00:11:08.823 filename=/dev/nvme0n2 00:11:08.823 [job2] 00:11:08.823 filename=/dev/nvme0n3 00:11:08.823 [job3] 00:11:08.823 filename=/dev/nvme0n4 00:11:08.823 Could not set queue depth (nvme0n1) 00:11:08.823 Could not set queue depth (nvme0n2) 00:11:08.823 Could not set queue depth (nvme0n3) 00:11:08.823 Could not set queue depth (nvme0n4) 00:11:09.084 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.084 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.084 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.084 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.084 fio-3.35 00:11:09.084 Starting 4 threads 00:11:10.471 00:11:10.471 job0: (groupid=0, jobs=1): err= 0: pid=1951389: Thu Dec 5 21:04:11 2024 00:11:10.471 read: IOPS=7699, BW=30.1MiB/s (31.5MB/s)(30.3MiB/1007msec) 00:11:10.471 slat (nsec): min=968, max=13510k, avg=64166.57, stdev=469047.75 00:11:10.471 clat (usec): min=2579, max=23839, avg=8575.95, stdev=2679.53 00:11:10.471 lat (usec): min=2586, max=23865, avg=8640.12, stdev=2703.75 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6718], 00:11:10.471 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8586], 00:11:10.471 | 70.00th=[ 9241], 80.00th=[ 9896], 
90.00th=[11338], 95.00th=[13829], 00:11:10.471 | 99.00th=[20579], 99.50th=[20579], 99.90th=[21627], 99.95th=[21890], 00:11:10.471 | 99.99th=[23725] 00:11:10.471 write: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec); 0 zone resets 00:11:10.471 slat (nsec): min=1627, max=6726.0k, avg=56090.91, stdev=393800.66 00:11:10.471 clat (usec): min=1183, max=21815, avg=7473.22, stdev=3131.03 00:11:10.471 lat (usec): min=1193, max=21818, avg=7529.31, stdev=3151.14 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[ 3326], 5.00th=[ 3720], 10.00th=[ 4293], 20.00th=[ 4817], 00:11:10.471 | 30.00th=[ 5538], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7111], 00:11:10.471 | 70.00th=[ 8356], 80.00th=[ 9503], 90.00th=[12387], 95.00th=[14222], 00:11:10.471 | 99.00th=[15926], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:11:10.471 | 99.99th=[21890] 00:11:10.471 bw ( KiB/s): min=32328, max=32768, per=36.01%, avg=32548.00, stdev=311.13, samples=2 00:11:10.471 iops : min= 8082, max= 8192, avg=8137.00, stdev=77.78, samples=2 00:11:10.471 lat (msec) : 2=0.01%, 4=4.31%, 10=76.84%, 20=18.16%, 50=0.68% 00:11:10.471 cpu : usr=4.97%, sys=9.54%, ctx=433, majf=0, minf=2 00:11:10.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:10.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.471 issued rwts: total=7753,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.471 job1: (groupid=0, jobs=1): err= 0: pid=1951390: Thu Dec 5 21:04:11 2024 00:11:10.471 read: IOPS=5706, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1004msec) 00:11:10.471 slat (nsec): min=923, max=44459k, avg=88857.29, stdev=752762.52 00:11:10.471 clat (usec): min=1309, max=55681, avg=11609.21, stdev=6755.02 00:11:10.471 lat (usec): min=2583, max=55690, avg=11698.07, stdev=6778.93 00:11:10.471 clat percentiles 
(usec): 00:11:10.471 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 8455], 00:11:10.471 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:11:10.471 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13698], 95.00th=[16188], 00:11:10.471 | 99.00th=[52691], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:11:10.471 | 99.99th=[55837] 00:11:10.471 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:11:10.471 slat (nsec): min=1612, max=5859.5k, avg=73007.62, stdev=405766.83 00:11:10.471 clat (usec): min=1212, max=20534, avg=9806.56, stdev=2839.73 00:11:10.471 lat (usec): min=1220, max=20541, avg=9879.57, stdev=2854.97 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[ 3687], 5.00th=[ 5276], 10.00th=[ 6587], 20.00th=[ 7308], 00:11:10.471 | 30.00th=[ 8356], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10552], 00:11:10.471 | 70.00th=[10945], 80.00th=[11994], 90.00th=[12518], 95.00th=[14484], 00:11:10.471 | 99.00th=[18744], 99.50th=[18744], 99.90th=[20579], 99.95th=[20579], 00:11:10.471 | 99.99th=[20579] 00:11:10.471 bw ( KiB/s): min=20232, max=28672, per=27.05%, avg=24452.00, stdev=5967.98, samples=2 00:11:10.471 iops : min= 5058, max= 7168, avg=6113.00, stdev=1492.00, samples=2 00:11:10.471 lat (msec) : 2=0.09%, 4=1.06%, 10=41.03%, 20=56.20%, 50=0.55% 00:11:10.471 lat (msec) : 100=1.07% 00:11:10.471 cpu : usr=3.69%, sys=6.18%, ctx=563, majf=0, minf=1 00:11:10.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:10.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.471 issued rwts: total=5729,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.471 job2: (groupid=0, jobs=1): err= 0: pid=1951391: Thu Dec 5 21:04:11 2024 00:11:10.471 read: IOPS=1937, BW=7750KiB/s 
(7936kB/s)(7812KiB/1008msec) 00:11:10.471 slat (usec): min=3, max=20300, avg=315.07, stdev=1987.34 00:11:10.471 clat (usec): min=1177, max=67567, avg=40601.61, stdev=19151.95 00:11:10.471 lat (usec): min=10651, max=67576, avg=40916.69, stdev=19182.48 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[10814], 5.00th=[14091], 10.00th=[14877], 20.00th=[15926], 00:11:10.471 | 30.00th=[25035], 40.00th=[33817], 50.00th=[47449], 60.00th=[52167], 00:11:10.471 | 70.00th=[56886], 80.00th=[58983], 90.00th=[61604], 95.00th=[65799], 00:11:10.471 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:11:10.471 | 99.99th=[67634] 00:11:10.471 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:10.471 slat (usec): min=6, max=19779, avg=183.88, stdev=1155.93 00:11:10.471 clat (usec): min=8681, max=44729, avg=22929.11, stdev=9745.12 00:11:10.471 lat (usec): min=11061, max=51935, avg=23112.99, stdev=9763.68 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[10421], 5.00th=[11600], 10.00th=[11731], 20.00th=[12125], 00:11:10.471 | 30.00th=[13304], 40.00th=[17171], 50.00th=[24773], 60.00th=[26346], 00:11:10.471 | 70.00th=[27919], 80.00th=[32900], 90.00th=[36439], 95.00th=[40109], 00:11:10.471 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:10.471 | 99.99th=[44827] 00:11:10.471 bw ( KiB/s): min= 8192, max= 8192, per=9.06%, avg=8192.00, stdev= 0.00, samples=2 00:11:10.471 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:10.471 lat (msec) : 2=0.02%, 10=0.45%, 20=34.87%, 50=41.34%, 100=23.32% 00:11:10.471 cpu : usr=1.89%, sys=2.48%, ctx=130, majf=0, minf=3 00:11:10.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:10.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.471 issued rwts: total=1953,2048,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:10.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.471 job3: (groupid=0, jobs=1): err= 0: pid=1951392: Thu Dec 5 21:04:11 2024 00:11:10.471 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:11:10.471 slat (nsec): min=964, max=6462.1k, avg=80632.37, stdev=430800.55 00:11:10.471 clat (usec): min=5800, max=24697, avg=10433.32, stdev=2701.34 00:11:10.471 lat (usec): min=5802, max=28276, avg=10513.96, stdev=2728.82 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[ 6718], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 8586], 00:11:10.471 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10159], 00:11:10.471 | 70.00th=[11207], 80.00th=[12256], 90.00th=[13173], 95.00th=[14615], 00:11:10.471 | 99.00th=[22676], 99.50th=[22938], 99.90th=[24773], 99.95th=[24773], 00:11:10.471 | 99.99th=[24773] 00:11:10.471 write: IOPS=6372, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1003msec); 0 zone resets 00:11:10.471 slat (nsec): min=1644, max=10573k, avg=74832.33, stdev=453135.22 00:11:10.471 clat (usec): min=693, max=37379, avg=9820.79, stdev=4111.59 00:11:10.471 lat (usec): min=3912, max=37411, avg=9895.63, stdev=4156.91 00:11:10.471 clat percentiles (usec): 00:11:10.471 | 1.00th=[ 4555], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7635], 00:11:10.471 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:11:10.471 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[14091], 95.00th=[19792], 00:11:10.471 | 99.00th=[26870], 99.50th=[29492], 99.90th=[29492], 99.95th=[32375], 00:11:10.471 | 99.99th=[37487] 00:11:10.471 bw ( KiB/s): min=24576, max=25536, per=27.72%, avg=25056.00, stdev=678.82, samples=2 00:11:10.471 iops : min= 6144, max= 6384, avg=6264.00, stdev=169.71, samples=2 00:11:10.471 lat (usec) : 750=0.01% 00:11:10.471 lat (msec) : 4=0.06%, 10=66.31%, 20=30.51%, 50=3.11% 00:11:10.471 cpu : usr=4.09%, sys=5.59%, ctx=679, majf=0, minf=1 00:11:10.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.3%, >=64=99.5% 00:11:10.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.471 issued rwts: total=6144,6392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.471 00:11:10.471 Run status group 0 (all jobs): 00:11:10.471 READ: bw=83.6MiB/s (87.7MB/s), 7750KiB/s-30.1MiB/s (7936kB/s-31.5MB/s), io=84.3MiB (88.4MB), run=1003-1008msec 00:11:10.471 WRITE: bw=88.3MiB/s (92.6MB/s), 8127KiB/s-31.8MiB/s (8322kB/s-33.3MB/s), io=89.0MiB (93.3MB), run=1003-1008msec 00:11:10.471 00:11:10.471 Disk stats (read/write): 00:11:10.472 nvme0n1: ios=6630/6656, merge=0/0, ticks=54325/45993, in_queue=100318, util=88.08% 00:11:10.472 nvme0n2: ios=5144/5163, merge=0/0, ticks=28303/23722, in_queue=52025, util=98.27% 00:11:10.472 nvme0n3: ios=1536/1536, merge=0/0, ticks=17372/8665, in_queue=26037, util=88.38% 00:11:10.472 nvme0n4: ios=5177/5287, merge=0/0, ticks=16807/16387, in_queue=33194, util=97.33% 00:11:10.472 21:04:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:10.472 [global] 00:11:10.472 thread=1 00:11:10.472 invalidate=1 00:11:10.472 rw=randwrite 00:11:10.472 time_based=1 00:11:10.472 runtime=1 00:11:10.472 ioengine=libaio 00:11:10.472 direct=1 00:11:10.472 bs=4096 00:11:10.472 iodepth=128 00:11:10.472 norandommap=0 00:11:10.472 numjobs=1 00:11:10.472 00:11:10.472 verify_dump=1 00:11:10.472 verify_backlog=512 00:11:10.472 verify_state_save=0 00:11:10.472 do_verify=1 00:11:10.472 verify=crc32c-intel 00:11:10.472 [job0] 00:11:10.472 filename=/dev/nvme0n1 00:11:10.472 [job1] 00:11:10.472 filename=/dev/nvme0n2 00:11:10.472 [job2] 00:11:10.472 filename=/dev/nvme0n3 00:11:10.472 [job3] 00:11:10.472 filename=/dev/nvme0n4 00:11:10.472 Could not set queue depth 
(nvme0n1) 00:11:10.472 Could not set queue depth (nvme0n2) 00:11:10.472 Could not set queue depth (nvme0n3) 00:11:10.472 Could not set queue depth (nvme0n4) 00:11:10.733 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.733 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.733 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.733 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.733 fio-3.35 00:11:10.733 Starting 4 threads 00:11:12.122 00:11:12.122 job0: (groupid=0, jobs=1): err= 0: pid=1951919: Thu Dec 5 21:04:13 2024 00:11:12.122 read: IOPS=5391, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1046msec) 00:11:12.122 slat (nsec): min=887, max=10501k, avg=92399.12, stdev=554059.86 00:11:12.122 clat (usec): min=3795, max=52302, avg=11704.90, stdev=5231.06 00:11:12.122 lat (usec): min=3814, max=52310, avg=11797.30, stdev=5265.15 00:11:12.122 clat percentiles (usec): 00:11:12.122 | 1.00th=[ 4752], 5.00th=[ 7242], 10.00th=[ 8356], 20.00th=[ 9110], 00:11:12.122 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10552], 00:11:12.122 | 70.00th=[10945], 80.00th=[11600], 90.00th=[18482], 95.00th=[25035], 00:11:12.122 | 99.00th=[30540], 99.50th=[31065], 99.90th=[52167], 99.95th=[52167], 00:11:12.122 | 99.99th=[52167] 00:11:12.122 write: IOPS=5873, BW=22.9MiB/s (24.1MB/s)(24.0MiB/1046msec); 0 zone resets 00:11:12.122 slat (nsec): min=1486, max=9327.2k, avg=74697.73, stdev=468497.02 00:11:12.122 clat (usec): min=1109, max=69017, avg=10849.38, stdev=7945.12 00:11:12.122 lat (usec): min=1118, max=72352, avg=10924.07, stdev=7965.10 00:11:12.122 clat percentiles (usec): 00:11:12.122 | 1.00th=[ 4293], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 8225], 00:11:12.122 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 
60.00th=[ 9241], 00:11:12.122 | 70.00th=[ 9765], 80.00th=[11207], 90.00th=[15926], 95.00th=[19792], 00:11:12.122 | 99.00th=[60556], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:11:12.122 | 99.99th=[68682] 00:11:12.122 bw ( KiB/s): min=21768, max=26432, per=26.91%, avg=24100.00, stdev=3297.95, samples=2 00:11:12.122 iops : min= 5442, max= 6608, avg=6025.00, stdev=824.49, samples=2 00:11:12.122 lat (msec) : 2=0.02%, 4=0.37%, 10=61.56%, 20=31.70%, 50=5.28% 00:11:12.122 lat (msec) : 100=1.07% 00:11:12.122 cpu : usr=2.68%, sys=4.02%, ctx=540, majf=0, minf=1 00:11:12.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:12.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.122 issued rwts: total=5640,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.122 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.122 job1: (groupid=0, jobs=1): err= 0: pid=1951920: Thu Dec 5 21:04:13 2024 00:11:12.122 read: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec) 00:11:12.122 slat (nsec): min=887, max=3334.5k, avg=65542.98, stdev=331628.32 00:11:12.122 clat (usec): min=5306, max=12356, avg=8358.25, stdev=1275.18 00:11:12.122 lat (usec): min=5308, max=12358, avg=8423.79, stdev=1290.62 00:11:12.122 clat percentiles (usec): 00:11:12.122 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7308], 00:11:12.122 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8586], 00:11:12.122 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10683], 00:11:12.122 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12125], 99.95th=[12256], 00:11:12.122 | 99.99th=[12387] 00:11:12.122 write: IOPS=8047, BW=31.4MiB/s (33.0MB/s)(31.5MiB/1001msec); 0 zone resets 00:11:12.122 slat (nsec): min=1485, max=3968.0k, avg=59165.50, stdev=281321.87 00:11:12.122 clat (usec): min=642, max=14628, avg=7724.95, stdev=1236.76 
00:11:12.122 lat (usec): min=2545, max=14638, avg=7784.11, stdev=1242.57 00:11:12.122 clat percentiles (usec): 00:11:12.122 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 6849], 00:11:12.122 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7767], 00:11:12.122 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[ 9503], 00:11:12.122 | 99.00th=[11076], 99.50th=[12649], 99.90th=[14615], 99.95th=[14615], 00:11:12.122 | 99.99th=[14615] 00:11:12.123 bw ( KiB/s): min=28992, max=28992, per=32.38%, avg=28992.00, stdev= 0.00, samples=1 00:11:12.123 iops : min= 7248, max= 7248, avg=7248.00, stdev= 0.00, samples=1 00:11:12.123 lat (usec) : 750=0.01% 00:11:12.123 lat (msec) : 4=0.20%, 10=92.93%, 20=6.86% 00:11:12.123 cpu : usr=2.00%, sys=5.30%, ctx=973, majf=0, minf=2 00:11:12.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.123 issued rwts: total=7680,8056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.123 job2: (groupid=0, jobs=1): err= 0: pid=1951921: Thu Dec 5 21:04:13 2024 00:11:12.123 read: IOPS=4139, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1005msec) 00:11:12.123 slat (nsec): min=947, max=10354k, avg=115543.22, stdev=727683.77 00:11:12.123 clat (usec): min=1453, max=28315, avg=14702.99, stdev=4223.96 00:11:12.123 lat (usec): min=6368, max=28341, avg=14818.53, stdev=4276.51 00:11:12.123 clat percentiles (usec): 00:11:12.123 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10028], 00:11:12.123 | 30.00th=[11863], 40.00th=[14091], 50.00th=[15139], 60.00th=[15926], 00:11:12.123 | 70.00th=[17433], 80.00th=[18482], 90.00th=[19792], 95.00th=[21365], 00:11:12.123 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25822], 99.95th=[26608], 00:11:12.123 | 99.99th=[28443] 
00:11:12.123 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:11:12.123 slat (nsec): min=1565, max=11071k, avg=104228.92, stdev=653665.86 00:11:12.123 clat (usec): min=885, max=34469, avg=14298.41, stdev=6336.09 00:11:12.123 lat (usec): min=897, max=34475, avg=14402.64, stdev=6397.60 00:11:12.123 clat percentiles (usec): 00:11:12.123 | 1.00th=[ 5211], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8225], 00:11:12.123 | 30.00th=[10290], 40.00th=[13173], 50.00th=[13435], 60.00th=[13960], 00:11:12.123 | 70.00th=[16188], 80.00th=[19268], 90.00th=[24773], 95.00th=[26870], 00:11:12.123 | 99.00th=[31589], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:11:12.123 | 99.99th=[34341] 00:11:12.123 bw ( KiB/s): min=16384, max=19968, per=20.30%, avg=18176.00, stdev=2534.27, samples=2 00:11:12.123 iops : min= 4096, max= 4992, avg=4544.00, stdev=633.57, samples=2 00:11:12.123 lat (usec) : 1000=0.10% 00:11:12.123 lat (msec) : 2=0.14%, 4=0.07%, 10=24.10%, 20=62.41%, 50=13.18% 00:11:12.123 cpu : usr=3.29%, sys=5.28%, ctx=342, majf=0, minf=1 00:11:12.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.123 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.123 job3: (groupid=0, jobs=1): err= 0: pid=1951922: Thu Dec 5 21:04:13 2024 00:11:12.123 read: IOPS=4461, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:11:12.123 slat (nsec): min=961, max=8903.6k, avg=115800.19, stdev=739354.65 00:11:12.123 clat (usec): min=2249, max=50529, avg=15073.42, stdev=10776.15 00:11:12.123 lat (usec): min=3200, max=50559, avg=15189.22, stdev=10866.41 00:11:12.123 clat percentiles (usec): 00:11:12.123 | 1.00th=[ 3720], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7504], 00:11:12.123 | 
30.00th=[ 7767], 40.00th=[ 8586], 50.00th=[10028], 60.00th=[12911], 00:11:12.123 | 70.00th=[16057], 80.00th=[20317], 90.00th=[35390], 95.00th=[39584], 00:11:12.123 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46924], 99.95th=[47973], 00:11:12.123 | 99.99th=[50594] 00:11:12.123 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:12.123 slat (nsec): min=1597, max=8266.6k, avg=94912.57, stdev=593537.19 00:11:12.123 clat (usec): min=773, max=46606, avg=12945.30, stdev=8747.46 00:11:12.123 lat (usec): min=781, max=46614, avg=13040.21, stdev=8820.97 00:11:12.123 clat percentiles (usec): 00:11:12.123 | 1.00th=[ 3097], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 6980], 00:11:12.123 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8848], 60.00th=[12649], 00:11:12.123 | 70.00th=[13304], 80.00th=[19792], 90.00th=[25560], 95.00th=[32900], 00:11:12.123 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:11:12.123 | 99.99th=[46400] 00:11:12.123 bw ( KiB/s): min=12288, max=24576, per=20.58%, avg=18432.00, stdev=8688.93, samples=2 00:11:12.123 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:11:12.123 lat (usec) : 1000=0.03% 00:11:12.123 lat (msec) : 2=0.10%, 4=1.56%, 10=49.54%, 20=27.92%, 50=20.83% 00:11:12.123 lat (msec) : 100=0.01% 00:11:12.123 cpu : usr=3.29%, sys=5.69%, ctx=354, majf=0, minf=1 00:11:12.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.123 issued rwts: total=4475,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.123 00:11:12.123 Run status group 0 (all jobs): 00:11:12.123 READ: bw=82.0MiB/s (86.0MB/s), 16.2MiB/s-30.0MiB/s (17.0MB/s-31.4MB/s), io=85.8MiB (89.9MB), run=1001-1046msec 00:11:12.123 WRITE: bw=87.4MiB/s 
(91.7MB/s), 17.9MiB/s-31.4MiB/s (18.8MB/s-33.0MB/s), io=91.5MiB (95.9MB), run=1001-1046msec 00:11:12.123 00:11:12.123 Disk stats (read/write): 00:11:12.123 nvme0n1: ios=5689/6144, merge=0/0, ticks=20871/18803, in_queue=39674, util=85.57% 00:11:12.123 nvme0n2: ios=6059/6144, merge=0/0, ticks=15188/13385, in_queue=28573, util=81.45% 00:11:12.123 nvme0n3: ios=3072/3303, merge=0/0, ticks=26781/26899, in_queue=53680, util=86.33% 00:11:12.123 nvme0n4: ios=3596/3584, merge=0/0, ticks=28123/22098, in_queue=50221, util=97.03% 00:11:12.123 21:04:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:12.123 21:04:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1952254 00:11:12.123 21:04:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:12.123 21:04:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:12.123 [global] 00:11:12.123 thread=1 00:11:12.123 invalidate=1 00:11:12.123 rw=read 00:11:12.123 time_based=1 00:11:12.123 runtime=10 00:11:12.123 ioengine=libaio 00:11:12.123 direct=1 00:11:12.123 bs=4096 00:11:12.123 iodepth=1 00:11:12.123 norandommap=1 00:11:12.123 numjobs=1 00:11:12.123 00:11:12.123 [job0] 00:11:12.123 filename=/dev/nvme0n1 00:11:12.123 [job1] 00:11:12.123 filename=/dev/nvme0n2 00:11:12.123 [job2] 00:11:12.123 filename=/dev/nvme0n3 00:11:12.123 [job3] 00:11:12.123 filename=/dev/nvme0n4 00:11:12.123 Could not set queue depth (nvme0n1) 00:11:12.123 Could not set queue depth (nvme0n2) 00:11:12.123 Could not set queue depth (nvme0n3) 00:11:12.123 Could not set queue depth (nvme0n4) 00:11:12.385 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.385 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.385 job2: (g=0): rw=read, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.385 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.385 fio-3.35 00:11:12.385 Starting 4 threads 00:11:15.697 21:04:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:15.697 21:04:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:15.697 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:11:15.697 fio: pid=1952443, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.697 21:04:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.697 21:04:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:15.697 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=278528, buflen=4096 00:11:15.698 fio: pid=1952442, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.698 21:04:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.698 21:04:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:15.698 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=294912, buflen=4096 00:11:15.698 fio: pid=1952440, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:15.959 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=11284480, buflen=4096 00:11:15.959 fio: pid=1952441, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.959 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.959 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:15.959 00:11:15.959 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1952440: Thu Dec 5 21:04:17 2024 00:11:15.959 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(288KiB/3002msec) 00:11:15.959 slat (usec): min=25, max=28734, avg=519.30, stdev=3450.25 00:11:15.959 clat (usec): min=919, max=42127, avg=41156.63, stdev=4828.13 00:11:15.959 lat (usec): min=961, max=70117, avg=41583.92, stdev=5909.74 00:11:15.959 clat percentiles (usec): 00:11:15.959 | 1.00th=[ 922], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:15.959 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:15.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:15.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.959 | 99.99th=[42206] 00:11:15.959 bw ( KiB/s): min= 96, max= 96, per=2.58%, avg=96.00, stdev= 0.00, samples=5 00:11:15.959 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:11:15.959 lat (usec) : 1000=1.37% 00:11:15.959 lat (msec) : 50=97.26% 00:11:15.959 cpu : usr=0.10%, sys=0.20%, ctx=75, majf=0, minf=1 00:11:15.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.959 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:11:15.959 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1952441: Thu Dec 5 21:04:17 2024 00:11:15.959 read: IOPS=866, BW=3466KiB/s (3550kB/s)(10.8MiB/3179msec) 00:11:15.959 slat (usec): min=7, max=20791, avg=49.30, stdev=509.12 00:11:15.959 clat (usec): min=379, max=6507, avg=1087.78, stdev=140.45 00:11:15.959 lat (usec): min=408, max=21809, avg=1137.08, stdev=525.73 00:11:15.959 clat percentiles (usec): 00:11:15.959 | 1.00th=[ 783], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1012], 00:11:15.959 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:11:15.959 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:11:15.959 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1336], 00:11:15.959 | 99.99th=[ 6521] 00:11:15.959 bw ( KiB/s): min= 3430, max= 3552, per=94.06%, avg=3501.00, stdev=41.68, samples=6 00:11:15.959 iops : min= 857, max= 888, avg=875.17, stdev=10.59, samples=6 00:11:15.959 lat (usec) : 500=0.04%, 750=0.58%, 1000=15.97% 00:11:15.959 lat (msec) : 2=83.35%, 10=0.04% 00:11:15.959 cpu : usr=1.70%, sys=3.49%, ctx=2767, majf=0, minf=2 00:11:15.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.959 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1952442: Thu Dec 5 21:04:17 2024 00:11:15.959 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(272KiB/2828msec) 00:11:15.959 slat (usec): min=23, max=3588, avg=77.87, stdev=428.85 00:11:15.959 clat (usec): min=833, max=42105, avg=41186.72, stdev=4978.77 00:11:15.959 lat (usec): min=873, 
max=45068, avg=41265.36, stdev=4998.87 00:11:15.959 clat percentiles (usec): 00:11:15.959 | 1.00th=[ 832], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:15.959 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:15.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:15.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.959 | 99.99th=[42206] 00:11:15.959 bw ( KiB/s): min= 96, max= 96, per=2.58%, avg=96.00, stdev= 0.00, samples=5 00:11:15.959 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:11:15.959 lat (usec) : 1000=1.45% 00:11:15.959 lat (msec) : 50=97.10% 00:11:15.959 cpu : usr=0.04%, sys=0.04%, ctx=70, majf=0, minf=2 00:11:15.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.959 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1952443: Thu Dec 5 21:04:17 2024 00:11:15.959 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(252KiB/2608msec) 00:11:15.959 slat (nsec): min=26584, max=34806, avg=27093.75, stdev=1021.80 00:11:15.959 clat (usec): min=755, max=42100, avg=41007.11, stdev=5169.67 00:11:15.959 lat (usec): min=790, max=42127, avg=41034.21, stdev=5168.68 00:11:15.959 clat percentiles (usec): 00:11:15.959 | 1.00th=[ 758], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:15.959 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:11:15.959 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:15.959 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.959 | 99.99th=[42206] 00:11:15.959 bw ( 
KiB/s): min= 96, max= 104, per=2.61%, avg=97.60, stdev= 3.58, samples=5 00:11:15.959 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:11:15.959 lat (usec) : 1000=1.56% 00:11:15.959 lat (msec) : 50=96.88% 00:11:15.959 cpu : usr=0.00%, sys=0.15%, ctx=64, majf=0, minf=2 00:11:15.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.959 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.959 00:11:15.959 Run status group 0 (all jobs): 00:11:15.960 READ: bw=3722KiB/s (3811kB/s), 95.9KiB/s-3466KiB/s (98.2kB/s-3550kB/s), io=11.6MiB (12.1MB), run=2608-3179msec 00:11:15.960 00:11:15.960 Disk stats (read/write): 00:11:15.960 nvme0n1: ios=68/0, merge=0/0, ticks=2796/0, in_queue=2796, util=93.79% 00:11:15.960 nvme0n2: ios=2727/0, merge=0/0, ticks=3451/0, in_queue=3451, util=98.08% 00:11:15.960 nvme0n3: ios=62/0, merge=0/0, ticks=2554/0, in_queue=2554, util=95.99% 00:11:15.960 nvme0n4: ios=63/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.42% 00:11:15.960 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.960 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:16.220 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.220 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:16.481 21:04:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.481 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:16.481 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.481 21:04:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1952254 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.742 21:04:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:16.742 nvmf hotplug test: fio failed as expected 00:11:16.742 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.004 rmmod nvme_tcp 00:11:17.004 rmmod nvme_fabrics 00:11:17.004 rmmod nvme_keyring 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1948445 ']' 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1948445 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1948445 ']' 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1948445 00:11:17.004 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948445 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948445' 00:11:17.266 killing process with pid 1948445 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1948445 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1948445 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.266 21:04:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.815 00:11:19.815 real 0m30.070s 00:11:19.815 user 2m29.733s 00:11:19.815 sys 0m9.866s 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.815 ************************************ 00:11:19.815 END TEST nvmf_fio_target 00:11:19.815 ************************************ 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.815 21:04:20 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.815 ************************************ 00:11:19.815 START TEST nvmf_bdevio 00:11:19.815 ************************************ 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.815 * Looking for test storage... 00:11:19.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.815 --rc genhtml_branch_coverage=1 00:11:19.815 --rc genhtml_function_coverage=1 00:11:19.815 --rc genhtml_legend=1 00:11:19.815 --rc geninfo_all_blocks=1 00:11:19.815 --rc geninfo_unexecuted_blocks=1 00:11:19.815 00:11:19.815 ' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.815 --rc genhtml_branch_coverage=1 00:11:19.815 --rc genhtml_function_coverage=1 00:11:19.815 --rc genhtml_legend=1 00:11:19.815 --rc geninfo_all_blocks=1 00:11:19.815 --rc geninfo_unexecuted_blocks=1 00:11:19.815 00:11:19.815 ' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.815 --rc genhtml_branch_coverage=1 00:11:19.815 --rc genhtml_function_coverage=1 00:11:19.815 --rc genhtml_legend=1 00:11:19.815 --rc geninfo_all_blocks=1 00:11:19.815 --rc geninfo_unexecuted_blocks=1 00:11:19.815 00:11:19.815 ' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.815 --rc genhtml_branch_coverage=1 00:11:19.815 --rc genhtml_function_coverage=1 00:11:19.815 --rc genhtml_legend=1 00:11:19.815 --rc geninfo_all_blocks=1 00:11:19.815 --rc geninfo_unexecuted_blocks=1 00:11:19.815 00:11:19.815 ' 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.815 21:04:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.815 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.816 21:04:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:27.962 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.962 21:04:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:27.962 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.962 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:27.963 Found net devices under 0000:31:00.0: cvl_0_0 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:27.963 Found net devices under 0000:31:00.1: cvl_0_1 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.963 21:04:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:11:27.963 00:11:27.963 --- 10.0.0.2 ping statistics --- 00:11:27.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.963 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:11:27.963 00:11:27.963 --- 10.0.0.1 ping statistics --- 00:11:27.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.963 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1958151 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1958151 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1958151 ']' 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.963 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.963 [2024-12-05 21:04:29.143176] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:11:27.963 [2024-12-05 21:04:29.143240] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.963 [2024-12-05 21:04:29.249424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.963 [2024-12-05 21:04:29.300321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.963 [2024-12-05 21:04:29.300369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:27.963 [2024-12-05 21:04:29.300379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.963 [2024-12-05 21:04:29.300387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.963 [2024-12-05 21:04:29.300394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.963 [2024-12-05 21:04:29.302387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:27.963 [2024-12-05 21:04:29.302549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:27.963 [2024-12-05 21:04:29.302706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.963 [2024-12-05 21:04:29.302706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:28.536 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.536 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:28.536 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.536 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.536 21:04:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.799 [2024-12-05 21:04:30.011153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.799 Malloc0 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.799 [2024-12-05 
21:04:30.094935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:28.799 { 00:11:28.799 "params": { 00:11:28.799 "name": "Nvme$subsystem", 00:11:28.799 "trtype": "$TEST_TRANSPORT", 00:11:28.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.799 "adrfam": "ipv4", 00:11:28.799 "trsvcid": "$NVMF_PORT", 00:11:28.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.799 "hdgst": ${hdgst:-false}, 00:11:28.799 "ddgst": ${ddgst:-false} 00:11:28.799 }, 00:11:28.799 "method": "bdev_nvme_attach_controller" 00:11:28.799 } 00:11:28.799 EOF 00:11:28.799 )") 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:28.799 21:04:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:28.799 "params": { 00:11:28.799 "name": "Nvme1", 00:11:28.799 "trtype": "tcp", 00:11:28.799 "traddr": "10.0.0.2", 00:11:28.799 "adrfam": "ipv4", 00:11:28.799 "trsvcid": "4420", 00:11:28.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.799 "hdgst": false, 00:11:28.799 "ddgst": false 00:11:28.799 }, 00:11:28.799 "method": "bdev_nvme_attach_controller" 00:11:28.799 }' 00:11:28.799 [2024-12-05 21:04:30.159793] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:11:28.799 [2024-12-05 21:04:30.159893] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958307 ] 00:11:29.061 [2024-12-05 21:04:30.248644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:29.061 [2024-12-05 21:04:30.292899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.061 [2024-12-05 21:04:30.292970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.061 [2024-12-05 21:04:30.292973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.323 I/O targets: 00:11:29.323 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:29.323 00:11:29.323 00:11:29.323 CUnit - A unit testing framework for C - Version 2.1-3 00:11:29.323 http://cunit.sourceforge.net/ 00:11:29.323 00:11:29.323 00:11:29.323 Suite: bdevio tests on: Nvme1n1 00:11:29.323 Test: blockdev write read block ...passed 00:11:29.323 Test: blockdev write zeroes read block ...passed 00:11:29.323 Test: blockdev write zeroes read no split ...passed 00:11:29.323 Test: blockdev write zeroes read split 
...passed 00:11:29.323 Test: blockdev write zeroes read split partial ...passed 00:11:29.323 Test: blockdev reset ...[2024-12-05 21:04:30.721715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:29.323 [2024-12-05 21:04:30.721780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5c0e0 (9): Bad file descriptor 00:11:29.323 [2024-12-05 21:04:30.734791] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:29.323 passed 00:11:29.584 Test: blockdev write read 8 blocks ...passed 00:11:29.584 Test: blockdev write read size > 128k ...passed 00:11:29.584 Test: blockdev write read invalid size ...passed 00:11:29.584 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:29.584 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:29.584 Test: blockdev write read max offset ...passed 00:11:29.584 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:29.584 Test: blockdev writev readv 8 blocks ...passed 00:11:29.584 Test: blockdev writev readv 30 x 1block ...passed 00:11:29.846 Test: blockdev writev readv block ...passed 00:11:29.846 Test: blockdev writev readv size > 128k ...passed 00:11:29.846 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:29.846 Test: blockdev comparev and writev ...[2024-12-05 21:04:31.035842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.035871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.035882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 
21:04:31.035888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.036209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.036218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.036229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.036234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.036559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.036568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.036577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.036583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.036908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.036919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.036929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.846 [2024-12-05 21:04:31.036934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:29.846 passed 00:11:29.846 Test: blockdev nvme passthru rw ...passed 00:11:29.846 Test: blockdev nvme passthru vendor specific ...[2024-12-05 21:04:31.121303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.846 [2024-12-05 21:04:31.121313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.121530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.846 [2024-12-05 21:04:31.121538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.121755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.846 [2024-12-05 21:04:31.121763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:29.846 [2024-12-05 21:04:31.121976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.846 [2024-12-05 21:04:31.121985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:29.846 passed 00:11:29.846 Test: blockdev nvme admin passthru ...passed 00:11:29.846 Test: blockdev copy ...passed 00:11:29.846 00:11:29.846 Run Summary: Type Total Ran Passed Failed Inactive 00:11:29.846 suites 1 1 n/a 0 0 00:11:29.846 tests 23 23 23 0 0 00:11:29.846 asserts 152 152 152 0 n/a 00:11:29.846 00:11:29.846 Elapsed time = 1.167 seconds 
00:11:29.846 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.846 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.846 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.107 rmmod nvme_tcp 00:11:30.107 rmmod nvme_fabrics 00:11:30.107 rmmod nvme_keyring 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1958151 ']' 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1958151 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1958151 ']' 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1958151 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1958151 00:11:30.107 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:30.108 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:30.108 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1958151' 00:11:30.108 killing process with pid 1958151 00:11:30.108 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1958151 00:11:30.108 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1958151 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.369 21:04:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.281 21:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.281 00:11:32.281 real 0m12.891s 00:11:32.281 user 0m13.639s 00:11:32.281 sys 0m6.838s 00:11:32.281 21:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.281 21:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.281 ************************************ 00:11:32.281 END TEST nvmf_bdevio 00:11:32.281 ************************************ 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:32.541 00:11:32.541 real 5m12.288s 00:11:32.541 user 11m40.347s 00:11:32.541 sys 1m56.104s 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:32.541 ************************************ 00:11:32.541 END TEST nvmf_target_core 00:11:32.541 ************************************ 00:11:32.541 21:04:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.541 21:04:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.541 21:04:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.541 21:04:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:32.541 ************************************ 00:11:32.541 START TEST nvmf_target_extra 00:11:32.541 ************************************ 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:32.541 * Looking for test storage... 00:11:32.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.541 21:04:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.803 --rc genhtml_branch_coverage=1 00:11:32.803 --rc genhtml_function_coverage=1 00:11:32.803 --rc genhtml_legend=1 00:11:32.803 --rc geninfo_all_blocks=1 
00:11:32.803 --rc geninfo_unexecuted_blocks=1 00:11:32.803 00:11:32.803 ' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.803 --rc genhtml_branch_coverage=1 00:11:32.803 --rc genhtml_function_coverage=1 00:11:32.803 --rc genhtml_legend=1 00:11:32.803 --rc geninfo_all_blocks=1 00:11:32.803 --rc geninfo_unexecuted_blocks=1 00:11:32.803 00:11:32.803 ' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.803 --rc genhtml_branch_coverage=1 00:11:32.803 --rc genhtml_function_coverage=1 00:11:32.803 --rc genhtml_legend=1 00:11:32.803 --rc geninfo_all_blocks=1 00:11:32.803 --rc geninfo_unexecuted_blocks=1 00:11:32.803 00:11:32.803 ' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.803 --rc genhtml_branch_coverage=1 00:11:32.803 --rc genhtml_function_coverage=1 00:11:32.803 --rc genhtml_legend=1 00:11:32.803 --rc geninfo_all_blocks=1 00:11:32.803 --rc geninfo_unexecuted_blocks=1 00:11:32.803 00:11:32.803 ' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.803 ************************************ 00:11:32.803 START TEST nvmf_example 00:11:32.803 ************************************ 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:32.803 * Looking for test storage... 00:11:32.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.803 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.064 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.064 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.064 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.064 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.064 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.065 
21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.065 --rc genhtml_branch_coverage=1 00:11:33.065 --rc genhtml_function_coverage=1 00:11:33.065 --rc genhtml_legend=1 00:11:33.065 --rc geninfo_all_blocks=1 00:11:33.065 --rc geninfo_unexecuted_blocks=1 00:11:33.065 00:11:33.065 ' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.065 --rc genhtml_branch_coverage=1 00:11:33.065 --rc genhtml_function_coverage=1 00:11:33.065 --rc genhtml_legend=1 00:11:33.065 --rc geninfo_all_blocks=1 00:11:33.065 --rc geninfo_unexecuted_blocks=1 00:11:33.065 00:11:33.065 ' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.065 --rc genhtml_branch_coverage=1 00:11:33.065 --rc genhtml_function_coverage=1 00:11:33.065 --rc genhtml_legend=1 00:11:33.065 --rc geninfo_all_blocks=1 00:11:33.065 --rc geninfo_unexecuted_blocks=1 00:11:33.065 00:11:33.065 ' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.065 --rc 
genhtml_branch_coverage=1 00:11:33.065 --rc genhtml_function_coverage=1 00:11:33.065 --rc genhtml_legend=1 00:11:33.065 --rc geninfo_all_blocks=1 00:11:33.065 --rc geninfo_unexecuted_blocks=1 00:11:33.065 00:11:33.065 ' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.065 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:33.066 21:04:34 
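Editorial note: the `[: : integer expression expected` error captured above comes from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'` — an empty variable fed to a numeric test. A minimal sketch of the failing pattern and a defensive rewrite (the function and variable names here are illustrative, not taken from common.sh):

```shell
# Failing pattern: if FLAG is unset or empty, `[` has no integer to compare:
#   [ "$FLAG" -eq 1 ]    # -> "[: : integer expression expected"
# Defensive rewrite: default the expansion to 0 so the test always sees a number.
is_enabled() {
  local flag=${1:-0}     # ${1:-0} substitutes 0 for an unset OR empty argument
  [ "$flag" -eq 1 ]
}
```

With this guard, `is_enabled ""` simply returns non-zero instead of emitting the error seen in the trace.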
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.066 
21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.066 21:04:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.208 21:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:41.208 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:41.208 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:41.208 Found net devices under 0000:31:00.0: cvl_0_0 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.208 21:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:41.208 Found net devices under 0000:31:00.1: cvl_0_1 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.208 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.208 
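Editorial note: the `Found net devices under 0000:31:00.x: cvl_0_x` lines above come from globbing each PCI function's `net/` directory in sysfs and stripping the path prefix (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `"${pci_net_devs[@]##*/}"`). A standalone sketch of that step — the sysfs root is parameterized here purely so it can be exercised without hardware; the real script reads `/sys/bus/pci/devices` directly:

```shell
# List kernel interface names registered under one PCI function,
# mirroring the glob + prefix-strip pair from the trace above.
pci_net_dev_names() {
  local pci=$1 base=${2:-/sys/bus/pci/devices} d
  for d in "$base/$pci/net/"*; do
    [ -e "$d" ] || continue       # skip the literal unexpanded glob when net/ is empty
    printf '%s\n' "${d##*/}"      # keep only the interface name, e.g. cvl_0_0
  done
}
```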
21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.209 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:11:41.470 00:11:41.470 --- 10.0.0.2 ping statistics --- 00:11:41.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.470 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:11:41.470 00:11:41.470 --- 10.0.0.1 ping statistics --- 00:11:41.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.470 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.470 21:04:42 
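Editorial note: the `ipts` helper traced above tags every rule it installs with an `SPDK_NVMF:` comment, which is what lets the `iptr` teardown later in this log drop exactly those rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The tag-and-filter idea can be sketched without touching a live firewall (rule text below is illustrative only):

```shell
# Build the tagged rule text, mirroring
#   iptables "$@" -m comment --comment "SPDK_NVMF:$*"
tag_rule() {
  printf 'iptables %s -m comment --comment "SPDK_NVMF:%s"\n' "$*" "$*"
}

# Cleanup side: keep only untagged rules from a saved ruleset, as the
# `iptables-save | grep -v SPDK_NVMF | iptables-restore` pipeline does.
strip_tagged() {
  grep -v SPDK_NVMF
}
```

Because every test-installed rule carries the marker comment, cleanup never has to remember individual rule specs; it just filters the marker out of the saved table.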
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1963584 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1963584 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1963584 ']' 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:41.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.470 21:04:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:42.413 
21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:42.413 21:04:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:54.640 Initializing NVMe Controllers 00:11:54.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:54.640 Initialization complete. Launching workers. 00:11:54.640 ======================================================== 00:11:54.640 Latency(us) 00:11:54.640 Device Information : IOPS MiB/s Average min max 00:11:54.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18672.84 72.94 3428.40 712.95 51903.28 00:11:54.640 ======================================================== 00:11:54.640 Total : 18672.84 72.94 3428.40 712.95 51903.28 00:11:54.640 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.640 rmmod nvme_tcp 00:11:54.640 rmmod nvme_fabrics 00:11:54.640 rmmod nvme_keyring 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
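Editorial note: the perf summary above is internally consistent with Little's law — with queue depth 64 (`-q 64` on the spdk_nvme_perf command line), average latency should be roughly qd / IOPS. A quick check with the numbers copied from the table (the 5% tolerance is an assumption; average latency includes more than pure queueing delay, so this is only a sanity check):

```shell
# Little's law sanity check on the spdk_nvme_perf summary:
#   avg latency (us) ~= queue_depth / IOPS * 1e6
qd=64 iops=18672.84 reported_us=3428.40
awk -v qd="$qd" -v iops="$iops" -v rep="$reported_us" 'BEGIN {
  est = qd / iops * 1e6
  printf "estimated %.2f us vs reported %.2f us\n", est, rep
  exit (est > rep * 1.05 || est < rep * 0.95) ? 1 : 0   # fail if off by >5%
}'
```

Here the estimate (~3427 us) lands within a fraction of a percent of the reported 3428.40 us average.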
00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1963584 ']' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1963584 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1963584 ']' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1963584 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963584 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963584' 00:11:54.640 killing process with pid 1963584 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1963584 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1963584 00:11:54.640 nvmf threads initialize successfully 00:11:54.640 bdev subsystem init successfully 00:11:54.640 created a nvmf target service 00:11:54.640 create targets's poll groups done 00:11:54.640 all subsystems of target started 00:11:54.640 nvmf target is running 00:11:54.640 all subsystems of target stopped 00:11:54.640 destroy targets's poll groups done 00:11:54.640 destroyed the nvmf target service 00:11:54.640 bdev subsystem 
finish successfully 00:11:54.640 nvmf threads destroy successfully 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.640 21:04:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.211 00:11:55.211 real 0m22.439s 00:11:55.211 user 0m47.529s 00:11:55.211 sys 0m7.557s 00:11:55.211 
21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.211 ************************************ 00:11:55.211 END TEST nvmf_example 00:11:55.211 ************************************ 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.211 ************************************ 00:11:55.211 START TEST nvmf_filesystem 00:11:55.211 ************************************ 00:11:55.211 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:55.474 * Looking for test storage... 
00:11:55.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:55.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:55.475 --rc genhtml_branch_coverage=1
00:11:55.475 --rc genhtml_function_coverage=1
00:11:55.475 --rc genhtml_legend=1
00:11:55.475 --rc geninfo_all_blocks=1
00:11:55.475 --rc geninfo_unexecuted_blocks=1
00:11:55.475 
00:11:55.475 '
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:55.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:55.475 --rc genhtml_branch_coverage=1
00:11:55.475 --rc genhtml_function_coverage=1
00:11:55.475 --rc genhtml_legend=1
00:11:55.475 --rc geninfo_all_blocks=1
00:11:55.475 --rc geninfo_unexecuted_blocks=1
00:11:55.475 
00:11:55.475 '
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:55.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:55.475 --rc genhtml_branch_coverage=1
00:11:55.475 --rc genhtml_function_coverage=1
00:11:55.475 --rc genhtml_legend=1
00:11:55.475 --rc geninfo_all_blocks=1
00:11:55.475 --rc geninfo_unexecuted_blocks=1
00:11:55.475 
00:11:55.475 '
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:55.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:55.475 --rc genhtml_branch_coverage=1
00:11:55.475 --rc genhtml_function_coverage=1
00:11:55.475 --rc genhtml_legend=1
00:11:55.475 --rc geninfo_all_blocks=1
00:11:55.475 --rc geninfo_unexecuted_blocks=1
00:11:55.475 
00:11:55.475 '
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:11:55.475 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:11:55.476 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:55.477 #define SPDK_CONFIG_H
00:11:55.477 #define SPDK_CONFIG_AIO_FSDEV 1
00:11:55.477 #define SPDK_CONFIG_APPS 1
00:11:55.477 #define SPDK_CONFIG_ARCH native
00:11:55.477 #undef SPDK_CONFIG_ASAN
00:11:55.477 #undef SPDK_CONFIG_AVAHI
00:11:55.477 #undef SPDK_CONFIG_CET
00:11:55.477 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:55.477 #define SPDK_CONFIG_COVERAGE 1
00:11:55.477 #define SPDK_CONFIG_CROSS_PREFIX 
00:11:55.477 #undef SPDK_CONFIG_CRYPTO
00:11:55.477 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:55.477 #undef SPDK_CONFIG_CUSTOMOCF
00:11:55.477 #undef SPDK_CONFIG_DAOS
00:11:55.477 #define SPDK_CONFIG_DAOS_DIR 
00:11:55.477 #define SPDK_CONFIG_DEBUG 1
00:11:55.477 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:55.477 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:55.477 #define SPDK_CONFIG_DPDK_INC_DIR 
00:11:55.477 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:11:55.477 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:55.477 #undef SPDK_CONFIG_DPDK_UADK
00:11:55.477 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:55.477 #define SPDK_CONFIG_EXAMPLES 1
00:11:55.477 #undef SPDK_CONFIG_FC
00:11:55.477 #define SPDK_CONFIG_FC_PATH 
00:11:55.477 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:55.477 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:55.477 #define SPDK_CONFIG_FSDEV 1
00:11:55.477 #undef SPDK_CONFIG_FUSE
00:11:55.477 #undef SPDK_CONFIG_FUZZER
00:11:55.477 #define SPDK_CONFIG_FUZZER_LIB 
00:11:55.477 #undef SPDK_CONFIG_GOLANG
00:11:55.477 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:55.477 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:55.477 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:55.477 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:55.477 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:55.477 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:55.477 #undef SPDK_CONFIG_HAVE_LZ4
00:11:55.477 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:55.477 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:55.477 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:55.477 #define SPDK_CONFIG_IDXD 1
00:11:55.477 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:55.477 #undef SPDK_CONFIG_IPSEC_MB
00:11:55.477 #define SPDK_CONFIG_IPSEC_MB_DIR 
00:11:55.477 #define SPDK_CONFIG_ISAL 1
00:11:55.477 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:55.477 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:55.477 #define SPDK_CONFIG_LIBDIR 
00:11:55.477 #undef SPDK_CONFIG_LTO
00:11:55.477 #define SPDK_CONFIG_MAX_LCORES 128
00:11:55.477 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:55.477 #define SPDK_CONFIG_NVME_CUSE 1
00:11:55.477 #undef SPDK_CONFIG_OCF
00:11:55.477 #define SPDK_CONFIG_OCF_PATH 
00:11:55.477 #define SPDK_CONFIG_OPENSSL_PATH 
00:11:55.477 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:55.477 #define SPDK_CONFIG_PGO_DIR 
00:11:55.477 #undef SPDK_CONFIG_PGO_USE
00:11:55.477 #define SPDK_CONFIG_PREFIX /usr/local
00:11:55.477 #undef SPDK_CONFIG_RAID5F
00:11:55.477 #undef SPDK_CONFIG_RBD
00:11:55.477 #define SPDK_CONFIG_RDMA 1
00:11:55.477 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:55.477 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:55.477 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:55.477 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:55.477 #define SPDK_CONFIG_SHARED 1
00:11:55.477 #undef SPDK_CONFIG_SMA
00:11:55.477 #define SPDK_CONFIG_TESTS 1
00:11:55.477 #undef SPDK_CONFIG_TSAN
00:11:55.477 #define SPDK_CONFIG_UBLK 1
00:11:55.477 #define SPDK_CONFIG_UBSAN 1
00:11:55.477 #undef SPDK_CONFIG_UNIT_TESTS
00:11:55.477 #undef SPDK_CONFIG_URING
00:11:55.477 #define SPDK_CONFIG_URING_PATH 
00:11:55.477 #undef SPDK_CONFIG_URING_ZNS
00:11:55.477 #undef SPDK_CONFIG_USDT
00:11:55.477 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:11:55.477 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:11:55.477 #define SPDK_CONFIG_VFIO_USER 1
00:11:55.477 #define SPDK_CONFIG_VFIO_USER_DIR 
00:11:55.477 #define SPDK_CONFIG_VHOST 1
00:11:55.477 #define SPDK_CONFIG_VIRTIO 1
00:11:55.477 #undef SPDK_CONFIG_VTUNE
00:11:55.477 #define SPDK_CONFIG_VTUNE_DIR 
00:11:55.477 #define SPDK_CONFIG_WERROR 1
00:11:55.477 #define SPDK_CONFIG_WPDK_DIR 
00:11:55.477 #undef SPDK_CONFIG_XNVME
00:11:55.477 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:11:55.477 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:11:55.478 21:04:56
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:55.478 
21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:55.478 21:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:55.478 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:55.479 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1966384 ]] 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1966384 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:55.741 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.zWDo61 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zWDo61/tests/target /tmp/spdk.zWDo61 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122142154752 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7214395392 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666906624 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847689216 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23621632 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:55.742 21:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677584896 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=692224 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:55.742 * Looking for test storage... 
00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122142154752 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9428987904 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.742 21:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:55.742 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:55.743 21:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.743 21:04:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.743 --rc genhtml_branch_coverage=1 00:11:55.743 --rc genhtml_function_coverage=1 00:11:55.743 --rc genhtml_legend=1 00:11:55.743 --rc geninfo_all_blocks=1 00:11:55.743 --rc geninfo_unexecuted_blocks=1 00:11:55.743 00:11:55.743 ' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.743 --rc genhtml_branch_coverage=1 00:11:55.743 --rc genhtml_function_coverage=1 00:11:55.743 --rc genhtml_legend=1 00:11:55.743 --rc geninfo_all_blocks=1 00:11:55.743 --rc geninfo_unexecuted_blocks=1 00:11:55.743 00:11:55.743 ' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.743 --rc genhtml_branch_coverage=1 00:11:55.743 --rc genhtml_function_coverage=1 00:11:55.743 --rc genhtml_legend=1 00:11:55.743 --rc geninfo_all_blocks=1 00:11:55.743 --rc geninfo_unexecuted_blocks=1 00:11:55.743 00:11:55.743 ' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.743 --rc genhtml_branch_coverage=1 00:11:55.743 --rc genhtml_function_coverage=1 00:11:55.743 --rc genhtml_legend=1 00:11:55.743 --rc geninfo_all_blocks=1 00:11:55.743 --rc geninfo_unexecuted_blocks=1 00:11:55.743 00:11:55.743 ' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.743 21:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.743 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.744 21:04:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.080 21:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:04.080 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:04.080 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.080 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.081 21:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:04.081 Found net devices under 0000:31:00.0: cvl_0_0 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:04.081 Found net devices under 0000:31:00.1: cvl_0_1 00:12:04.081 21:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:12:04.081 00:12:04.081 --- 10.0.0.2 ping statistics --- 00:12:04.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.081 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:12:04.081 00:12:04.081 --- 10.0.0.1 ping statistics --- 00:12:04.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.081 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:12:04.081 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:04.342 21:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.342 ************************************ 00:12:04.342 START TEST nvmf_filesystem_no_in_capsule 00:12:04.342 ************************************ 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1970707 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1970707 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1970707 ']' 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.342 21:05:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.342 [2024-12-05 21:05:05.665124] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:12:04.342 [2024-12-05 21:05:05.665188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.342 [2024-12-05 21:05:05.756609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.602 [2024-12-05 21:05:05.798051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.602 [2024-12-05 21:05:05.798086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:04.602 [2024-12-05 21:05:05.798094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.602 [2024-12-05 21:05:05.798101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.602 [2024-12-05 21:05:05.798107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.602 [2024-12-05 21:05:05.799700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.602 [2024-12-05 21:05:05.799835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.602 [2024-12-05 21:05:05.799994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.602 [2024-12-05 21:05:05.800089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.173 [2024-12-05 21:05:06.519819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.173 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.434 Malloc1 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.434 [2024-12-05 21:05:06.649811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:05.434 21:05:06 
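The target-side configuration performed by the `rpc_cmd` calls above (target/filesystem.sh@52-56) condenses to the sketch below. `rpc_cmd` in the harness wraps SPDK's `scripts/rpc.py`; `SPDK_ROOT` here is a placeholder for the build tree, and the function is illustrative, not invoked.

```shell
# Condensed target configuration from the rpc_cmd calls in the trace.
# SPDK_ROOT is a placeholder; rpc_cmd wraps scripts/rpc.py in the harness.
configure_nvmf_target() {
    local rpc="${SPDK_ROOT:?}/scripts/rpc.py"

    # The target app runs inside the namespace created earlier.
    ip netns exec cvl_0_0_ns_spdk "${SPDK_ROOT}/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF &

    # TCP transport with in-capsule data disabled (-c 0): this is the
    # "no_in_capsule" variant of the test.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0

    # 512 MiB RAM-backed bdev (512-byte blocks), exported through cnode1.
    "$rpc" bdev_malloc_create 512 512 -b Malloc1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

The `bdev_get_bdevs` dump that follows in the trace (block_size 512, num_blocks 1048576) is how the harness confirms the 512 MiB Malloc1 size before comparing it against the attached NVMe device.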
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:05.434 { 00:12:05.434 "name": "Malloc1", 00:12:05.434 "aliases": [ 00:12:05.434 "e324ed98-1f2f-4b37-b662-8054a11b3c3b" 00:12:05.434 ], 00:12:05.434 "product_name": "Malloc disk", 00:12:05.434 "block_size": 512, 00:12:05.434 "num_blocks": 1048576, 00:12:05.434 "uuid": "e324ed98-1f2f-4b37-b662-8054a11b3c3b", 00:12:05.434 "assigned_rate_limits": { 00:12:05.434 "rw_ios_per_sec": 0, 00:12:05.434 "rw_mbytes_per_sec": 0, 00:12:05.434 "r_mbytes_per_sec": 0, 00:12:05.434 "w_mbytes_per_sec": 0 00:12:05.434 }, 00:12:05.434 "claimed": true, 00:12:05.434 "claim_type": "exclusive_write", 00:12:05.434 "zoned": false, 00:12:05.434 "supported_io_types": { 00:12:05.434 "read": true, 00:12:05.434 "write": true, 00:12:05.434 "unmap": true, 00:12:05.434 "flush": true, 00:12:05.434 "reset": true, 00:12:05.434 "nvme_admin": false, 00:12:05.434 "nvme_io": false, 00:12:05.434 "nvme_io_md": false, 00:12:05.434 "write_zeroes": true, 00:12:05.434 "zcopy": true, 00:12:05.434 "get_zone_info": false, 00:12:05.434 "zone_management": false, 00:12:05.434 "zone_append": false, 00:12:05.434 "compare": false, 00:12:05.434 "compare_and_write": 
false, 00:12:05.434 "abort": true, 00:12:05.434 "seek_hole": false, 00:12:05.434 "seek_data": false, 00:12:05.434 "copy": true, 00:12:05.434 "nvme_iov_md": false 00:12:05.434 }, 00:12:05.434 "memory_domains": [ 00:12:05.434 { 00:12:05.434 "dma_device_id": "system", 00:12:05.434 "dma_device_type": 1 00:12:05.434 }, 00:12:05.434 { 00:12:05.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.434 "dma_device_type": 2 00:12:05.434 } 00:12:05.434 ], 00:12:05.434 "driver_specific": {} 00:12:05.434 } 00:12:05.434 ]' 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:05.434 21:05:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.349 21:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:07.349 21:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.349 21:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.349 21:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.349 21:05:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:09.266 21:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:09.266 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:09.267 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:09.529 21:05:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:09.789 21:05:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:10.736 21:05:12 
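The host-side steps traced above (target/filesystem.sh@60-69) amount to roughly the following: connect, locate the new block device by its subsystem serial, then carve one GPT partition. The `lsblk | awk` extraction is a simplification of the harness's `grep -oP` lookup, and the hostnqn argument from the trace is elided; treat this as a sketch.

```shell
# Host side: attach the remote namespace and partition it.
# Simplified from the harness's waitforserial/grep -oP logic.
connect_and_partition() {
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # Find the freshly attached namespace by subsystem serial
    # (nvme0n1 in this run).
    local dev
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1; exit}')

    # One partition spanning the whole 512 MiB namespace.
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
}
```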
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.736 ************************************ 00:12:10.736 START TEST filesystem_ext4 00:12:10.736 ************************************ 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:10.736 21:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:10.736 21:05:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:10.736 mke2fs 1.47.0 (5-Feb-2023) 00:12:10.736 Discarding device blocks: 0/522240 done 00:12:10.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:10.736 Filesystem UUID: f9424088-4add-4c3f-9317-4026f1adc3f1 00:12:10.736 Superblock backups stored on blocks: 00:12:10.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:10.736 00:12:10.736 Allocating group tables: 0/64 done 00:12:10.736 Writing inode tables: 0/64 done 00:12:10.997 Creating journal (8192 blocks): done 00:12:13.214 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:12:13.214 00:12:13.214 21:05:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:13.214 21:05:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.799 21:05:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.799 21:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1970707 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.799 00:12:19.799 real 0m8.006s 00:12:19.799 user 0m0.025s 00:12:19.799 sys 0m0.089s 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:19.799 ************************************ 00:12:19.799 END TEST filesystem_ext4 00:12:19.799 ************************************ 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:19.799 
21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.799 ************************************ 00:12:19.799 START TEST filesystem_btrfs 00:12:19.799 ************************************ 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:19.799 21:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:19.799 btrfs-progs v6.8.1 00:12:19.799 See https://btrfs.readthedocs.io for more information. 00:12:19.799 00:12:19.799 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:19.799 NOTE: several default settings have changed in version 5.15, please make sure 00:12:19.799 this does not affect your deployments: 00:12:19.799 - DUP for metadata (-m dup) 00:12:19.799 - enabled no-holes (-O no-holes) 00:12:19.799 - enabled free-space-tree (-R free-space-tree) 00:12:19.799 00:12:19.799 Label: (null) 00:12:19.799 UUID: a2893033-dbda-461e-8003-c343615bfa8e 00:12:19.799 Node size: 16384 00:12:19.799 Sector size: 4096 (CPU page size: 4096) 00:12:19.799 Filesystem size: 510.00MiB 00:12:19.799 Block group profiles: 00:12:19.799 Data: single 8.00MiB 00:12:19.799 Metadata: DUP 32.00MiB 00:12:19.799 System: DUP 8.00MiB 00:12:19.799 SSD detected: yes 00:12:19.799 Zoned device: no 00:12:19.799 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:19.799 Checksum: crc32c 00:12:19.799 Number of devices: 1 00:12:19.799 Devices: 00:12:19.799 ID SIZE PATH 00:12:19.799 1 510.00MiB /dev/nvme0n1p1 00:12:19.799 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.799 21:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1970707 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.799 00:12:19.799 real 0m0.830s 00:12:19.799 user 0m0.027s 00:12:19.799 sys 0m0.124s 00:12:19.799 21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.799 
21:05:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:19.799 ************************************ 00:12:19.799 END TEST filesystem_btrfs 00:12:19.799 ************************************ 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.800 ************************************ 00:12:19.800 START TEST filesystem_xfs 00:12:19.800 ************************************ 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:19.800 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:19.800 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:19.800 = sectsz=512 attr=2, projid32bit=1 00:12:19.800 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:19.800 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:19.800 data = bsize=4096 blocks=130560, imaxpct=25 00:12:19.800 = sunit=0 swidth=0 blks 00:12:19.800 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:19.800 log =internal log bsize=4096 blocks=16384, version=2 00:12:19.800 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:19.800 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:20.743 Discarding blocks...Done. 
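The mount/touch/sync/rm/umount cycle that this test repeats for each filesystem (target/filesystem.sh@23-30, seen above for ext4 and btrfs and next for xfs) can be sketched as one function. The mkfs force flags follow `make_filesystem()` in the trace: ext4 takes `-F`, btrfs and xfs take `-f`. Sketch only; it assumes the partition created earlier.

```shell
# Per-filesystem smoke test mirroring target/filesystem.sh:
# mkfs, mount, create/sync/remove a file, unmount.
check_filesystem() {
    local fstype=$1 dev=/dev/nvme0n1p1
    local force=-f
    [ "$fstype" = ext4 ] && force=-F   # make_filesystem(): ext4 uses -F

    "mkfs.$fstype" "$force" "$dev"
    mkdir -p /mnt/device
    mount "$dev" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
}
```

Exercised in the trace as the three subtests filesystem_ext4, filesystem_btrfs, and filesystem_xfs, each passing over NVMe/TCP.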
00:12:20.743 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:20.743 21:05:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1970707 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.658 21:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.658 00:12:22.658 real 0m2.737s 00:12:22.658 user 0m0.026s 00:12:22.658 sys 0m0.080s 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.658 ************************************ 00:12:22.658 END TEST filesystem_xfs 00:12:22.658 ************************************ 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:22.658 21:05:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.658 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1970707 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1970707 ']' 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1970707 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.659 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970707 00:12:22.919 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.919 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.919 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970707' 00:12:22.919 killing process with pid 1970707 00:12:22.919 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1970707 00:12:22.919 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1970707 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:23.179 00:12:23.179 real 0m18.776s 00:12:23.179 user 1m14.174s 00:12:23.179 sys 0m1.446s 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.179 ************************************ 00:12:23.179 END TEST nvmf_filesystem_no_in_capsule 00:12:23.179 ************************************ 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.179 21:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.179 ************************************ 00:12:23.179 START TEST nvmf_filesystem_in_capsule 00:12:23.179 ************************************ 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1974628 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1974628 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1974628 ']' 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.179 21:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.179 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.179 [2024-12-05 21:05:24.531995] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:12:23.179 [2024-12-05 21:05:24.532045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.440 [2024-12-05 21:05:24.619116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.440 [2024-12-05 21:05:24.654514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.440 [2024-12-05 21:05:24.654549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.440 [2024-12-05 21:05:24.654557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.440 [2024-12-05 21:05:24.654565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.440 [2024-12-05 21:05:24.654570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:23.440 [2024-12-05 21:05:24.657878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.440 [2024-12-05 21:05:24.657902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.440 [2024-12-05 21:05:24.658075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.440 [2024-12-05 21:05:24.658170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.440 [2024-12-05 21:05:24.795521] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.440 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.701 Malloc1 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.701 21:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.701 [2024-12-05 21:05:24.918838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.701 21:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.701 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:23.701 { 00:12:23.701 "name": "Malloc1", 00:12:23.701 "aliases": [ 00:12:23.701 "6973345f-07b0-4dc7-ae15-cbb6a4c90750" 00:12:23.701 ], 00:12:23.701 "product_name": "Malloc disk", 00:12:23.701 "block_size": 512, 00:12:23.701 "num_blocks": 1048576, 00:12:23.701 "uuid": "6973345f-07b0-4dc7-ae15-cbb6a4c90750", 00:12:23.701 "assigned_rate_limits": { 00:12:23.701 "rw_ios_per_sec": 0, 00:12:23.701 "rw_mbytes_per_sec": 0, 00:12:23.701 "r_mbytes_per_sec": 0, 00:12:23.701 "w_mbytes_per_sec": 0 00:12:23.701 }, 00:12:23.701 "claimed": true, 00:12:23.701 "claim_type": "exclusive_write", 00:12:23.701 "zoned": false, 00:12:23.701 "supported_io_types": { 00:12:23.701 "read": true, 00:12:23.701 "write": true, 00:12:23.701 "unmap": true, 00:12:23.701 "flush": true, 00:12:23.701 "reset": true, 00:12:23.701 "nvme_admin": false, 00:12:23.701 "nvme_io": false, 00:12:23.701 "nvme_io_md": false, 00:12:23.701 "write_zeroes": true, 00:12:23.701 "zcopy": true, 00:12:23.701 "get_zone_info": false, 00:12:23.701 "zone_management": false, 00:12:23.701 "zone_append": false, 00:12:23.701 "compare": false, 00:12:23.701 "compare_and_write": false, 00:12:23.701 "abort": true, 00:12:23.701 "seek_hole": false, 00:12:23.701 "seek_data": false, 00:12:23.701 "copy": true, 00:12:23.701 "nvme_iov_md": false 00:12:23.702 }, 00:12:23.702 "memory_domains": [ 00:12:23.702 { 00:12:23.702 "dma_device_id": "system", 00:12:23.702 "dma_device_type": 1 00:12:23.702 }, 00:12:23.702 { 00:12:23.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.702 "dma_device_type": 2 00:12:23.702 } 00:12:23.702 ], 00:12:23.702 
"driver_specific": {} 00:12:23.702 } 00:12:23.702 ]' 00:12:23.702 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:23.702 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:23.702 21:05:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:23.702 21:05:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:23.702 21:05:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:23.702 21:05:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:23.702 21:05:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:23.702 21:05:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.619 21:05:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.619 21:05:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.619 21:05:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.619 21:05:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:25.619 21:05:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:27.532 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:27.533 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:27.533 21:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:27.533 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:27.533 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:27.533 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:27.533 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:27.533 21:05:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:28.104 21:05:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.488 ************************************ 00:12:29.488 START TEST filesystem_in_capsule_ext4 00:12:29.488 ************************************ 00:12:29.488 21:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:29.488 21:05:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:29.488 mke2fs 1.47.0 (5-Feb-2023) 00:12:29.488 Discarding device blocks: 
0/522240 done 00:12:29.488 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:29.488 Filesystem UUID: e40e79ce-3049-49c4-beee-f2a29cde8ac7 00:12:29.488 Superblock backups stored on blocks: 00:12:29.488 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:29.488 00:12:29.488 Allocating group tables: 0/64 done 00:12:29.488 Writing inode tables: 0/64 done 00:12:32.032 Creating journal (8192 blocks): done 00:12:32.032 Writing superblocks and filesystem accounting information: 0/64 done 00:12:32.032 00:12:32.032 21:05:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:32.032 21:05:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1974628 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:38.614 00:12:38.614 real 0m9.163s 00:12:38.614 user 0m0.037s 00:12:38.614 sys 0m0.072s 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:38.614 ************************************ 00:12:38.614 END TEST filesystem_in_capsule_ext4 00:12:38.614 ************************************ 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.614 ************************************ 00:12:38.614 START 
TEST filesystem_in_capsule_btrfs 00:12:38.614 ************************************ 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:38.614 21:05:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:38.874 btrfs-progs v6.8.1 00:12:38.874 See https://btrfs.readthedocs.io for more information. 00:12:38.874 00:12:38.874 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:38.874 NOTE: several default settings have changed in version 5.15, please make sure 00:12:38.874 this does not affect your deployments: 00:12:38.874 - DUP for metadata (-m dup) 00:12:38.874 - enabled no-holes (-O no-holes) 00:12:38.874 - enabled free-space-tree (-R free-space-tree) 00:12:38.874 00:12:38.874 Label: (null) 00:12:38.874 UUID: e2def73a-64b3-4b65-a562-8e1e4c19d225 00:12:38.874 Node size: 16384 00:12:38.874 Sector size: 4096 (CPU page size: 4096) 00:12:38.874 Filesystem size: 510.00MiB 00:12:38.874 Block group profiles: 00:12:38.874 Data: single 8.00MiB 00:12:38.874 Metadata: DUP 32.00MiB 00:12:38.874 System: DUP 8.00MiB 00:12:38.874 SSD detected: yes 00:12:38.874 Zoned device: no 00:12:38.874 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:38.874 Checksum: crc32c 00:12:38.874 Number of devices: 1 00:12:38.874 Devices: 00:12:38.874 ID SIZE PATH 00:12:38.874 1 510.00MiB /dev/nvme0n1p1 00:12:38.874 00:12:38.874 21:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:38.874 21:05:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1974628 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:39.815 00:12:39.815 real 0m1.425s 00:12:39.815 user 0m0.029s 00:12:39.815 sys 0m0.121s 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.815 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:39.815 ************************************ 00:12:39.815 END TEST filesystem_in_capsule_btrfs 00:12:39.815 ************************************ 00:12:40.076 21:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.076 ************************************ 00:12:40.076 START TEST filesystem_in_capsule_xfs 00:12:40.076 ************************************ 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:40.076 
21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:40.076 21:05:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:40.076 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:40.076 = sectsz=512 attr=2, projid32bit=1 00:12:40.076 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:40.076 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:40.076 data = bsize=4096 blocks=130560, imaxpct=25 00:12:40.076 = sunit=0 swidth=0 blks 00:12:40.076 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:40.076 log =internal log bsize=4096 blocks=16384, version=2 00:12:40.076 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:40.076 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:41.016 Discarding blocks...Done. 
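The trace around this point repeats the same verification idiom after every mkfs/mount/umount cycle: `lsblk -l -o NAME | grep -q -w nvme0n1` followed by the same check for `nvme0n1p1`. The `-w` (word-match) flag is what makes the two checks independent: without it, searching for `nvme0n1` would also match the partition line `nvme0n1p1`. A minimal runnable sketch of that matching behavior, using a canned name list in place of real `lsblk` output (the `has_dev` helper and the sample `names` list are illustrative, not part of filesystem.sh):

```shell
#!/usr/bin/env bash

# Same word-match test the trace performs with `lsblk -l -o NAME | grep -q -w`,
# but fed from a string so it runs without NVMe hardware attached.
has_dev() {
    printf '%s\n' "$2" | grep -q -w "$1"
}

# Stand-in for `lsblk -l -o NAME` output on the test node.
names=$'nvme0n1\nnvme0n1p1\nsda'

has_dev nvme0n1 "$names"   && echo "nvme0n1 visible"    # exact word match
has_dev nvme0n1p1 "$names" && echo "nvme0n1p1 visible"  # partition matched separately
has_dev nvme0n1p2 "$names" || echo "nvme0n1p2 absent"   # -w rejects partial matches
```

With `-w`, `nvme0n1` matches only its own line ("p" is a word character, so the partition line is rejected), which is why the trace can assert the parent namespace and the partition are each still visible after unmounting.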
00:12:41.016 21:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:41.016 21:05:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:43.562 21:05:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1974628 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:43.822 00:12:43.822 real 0m3.843s 00:12:43.822 user 0m0.028s 00:12:43.822 sys 0m0.080s 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:43.822 ************************************ 00:12:43.822 END TEST filesystem_in_capsule_xfs 00:12:43.822 ************************************ 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:43.822 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.082 21:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1974628 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1974628 ']' 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1974628 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.082 21:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974628 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.082 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974628' 00:12:44.083 killing process with pid 1974628 00:12:44.083 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1974628 00:12:44.083 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1974628 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:44.344 00:12:44.344 real 0m21.223s 00:12:44.344 user 1m23.778s 00:12:44.344 sys 0m1.489s 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.344 ************************************ 00:12:44.344 END TEST nvmf_filesystem_in_capsule 00:12:44.344 ************************************ 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.344 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:44.344 rmmod nvme_tcp 00:12:44.344 rmmod nvme_fabrics 00:12:44.344 rmmod nvme_keyring 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:44.604 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.605 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.605 21:05:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.519 00:12:46.519 real 0m51.280s 00:12:46.519 user 2m40.634s 00:12:46.519 sys 0m9.483s 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:46.519 ************************************ 00:12:46.519 END TEST nvmf_filesystem 00:12:46.519 ************************************ 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.519 21:05:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.780 ************************************ 00:12:46.780 START TEST nvmf_target_discovery 00:12:46.780 ************************************ 00:12:46.780 21:05:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:46.780 * Looking for test storage... 
00:12:46.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:46.780 
21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:46.780 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:46.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.781 --rc genhtml_branch_coverage=1 00:12:46.781 --rc genhtml_function_coverage=1 00:12:46.781 --rc genhtml_legend=1 00:12:46.781 --rc geninfo_all_blocks=1 00:12:46.781 --rc geninfo_unexecuted_blocks=1 00:12:46.781 00:12:46.781 ' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:46.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.781 --rc genhtml_branch_coverage=1 00:12:46.781 --rc genhtml_function_coverage=1 00:12:46.781 --rc genhtml_legend=1 00:12:46.781 --rc geninfo_all_blocks=1 00:12:46.781 --rc geninfo_unexecuted_blocks=1 00:12:46.781 00:12:46.781 ' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:46.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.781 --rc genhtml_branch_coverage=1 00:12:46.781 --rc genhtml_function_coverage=1 00:12:46.781 --rc genhtml_legend=1 00:12:46.781 --rc geninfo_all_blocks=1 00:12:46.781 --rc geninfo_unexecuted_blocks=1 00:12:46.781 00:12:46.781 ' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:46.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.781 --rc genhtml_branch_coverage=1 00:12:46.781 --rc genhtml_function_coverage=1 00:12:46.781 --rc genhtml_legend=1 00:12:46.781 --rc geninfo_all_blocks=1 00:12:46.781 --rc geninfo_unexecuted_blocks=1 00:12:46.781 00:12:46.781 ' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.781 21:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.781 21:05:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.920 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.920 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.920 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:54.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:54.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.921 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:54.921 Found net devices under 0000:31:00.0: cvl_0_0 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.921 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:54.921 Found net devices under 0000:31:00.1: cvl_0_1 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.921 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.182 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:12:55.444 00:12:55.444 --- 10.0.0.2 ping statistics --- 00:12:55.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.444 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:12:55.444 00:12:55.444 --- 10.0.0.1 ping statistics --- 00:12:55.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.444 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1983859 00:12:55.444 21:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1983859 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1983859 ']' 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.444 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.445 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.445 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.445 21:05:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:55.445 [2024-12-05 21:05:56.760579] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:12:55.445 [2024-12-05 21:05:56.760641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.445 [2024-12-05 21:05:56.848288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.705 [2024-12-05 21:05:56.884162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:55.705 [2024-12-05 21:05:56.884194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.705 [2024-12-05 21:05:56.884202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.705 [2024-12-05 21:05:56.884210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.705 [2024-12-05 21:05:56.884215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.705 [2024-12-05 21:05:56.885755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.705 [2024-12-05 21:05:56.885873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.705 [2024-12-05 21:05:56.885980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.705 [2024-12-05 21:05:56.885980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.277 [2024-12-05 21:05:57.608844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.277 Null1 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.277 
21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.277 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.277 [2024-12-05 21:05:57.669201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 Null2 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 
21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.278 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.539 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.539 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:56.539 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:56.539 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.539 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.539 Null3 00:12:56.539 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 Null4 00:12:56.540 
21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.540 21:05:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:56.802 00:12:56.802 Discovery Log Number of Records 6, Generation counter 6 00:12:56.802 =====Discovery Log Entry 0====== 00:12:56.802 trtype: tcp 00:12:56.802 adrfam: ipv4 00:12:56.802 subtype: current discovery subsystem 00:12:56.802 treq: not required 00:12:56.802 portid: 0 00:12:56.802 trsvcid: 4420 00:12:56.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:56.802 traddr: 10.0.0.2 00:12:56.802 eflags: explicit discovery connections, duplicate discovery information 00:12:56.802 sectype: none 00:12:56.802 =====Discovery Log Entry 1====== 00:12:56.802 trtype: tcp 00:12:56.802 adrfam: ipv4 00:12:56.802 subtype: nvme subsystem 00:12:56.802 treq: not required 00:12:56.802 portid: 0 00:12:56.802 trsvcid: 4420 00:12:56.802 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:56.802 traddr: 10.0.0.2 00:12:56.802 eflags: none 00:12:56.802 sectype: none 00:12:56.802 =====Discovery Log Entry 2====== 00:12:56.802 
trtype: tcp 00:12:56.802 adrfam: ipv4 00:12:56.802 subtype: nvme subsystem 00:12:56.802 treq: not required 00:12:56.802 portid: 0 00:12:56.802 trsvcid: 4420 00:12:56.802 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:56.802 traddr: 10.0.0.2 00:12:56.802 eflags: none 00:12:56.802 sectype: none 00:12:56.802 =====Discovery Log Entry 3====== 00:12:56.802 trtype: tcp 00:12:56.802 adrfam: ipv4 00:12:56.802 subtype: nvme subsystem 00:12:56.802 treq: not required 00:12:56.802 portid: 0 00:12:56.802 trsvcid: 4420 00:12:56.802 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:56.802 traddr: 10.0.0.2 00:12:56.802 eflags: none 00:12:56.802 sectype: none 00:12:56.802 =====Discovery Log Entry 4====== 00:12:56.802 trtype: tcp 00:12:56.802 adrfam: ipv4 00:12:56.802 subtype: nvme subsystem 00:12:56.802 treq: not required 00:12:56.802 portid: 0 00:12:56.802 trsvcid: 4420 00:12:56.802 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:56.802 traddr: 10.0.0.2 00:12:56.802 eflags: none 00:12:56.802 sectype: none 00:12:56.802 =====Discovery Log Entry 5====== 00:12:56.802 trtype: tcp 00:12:56.802 adrfam: ipv4 00:12:56.802 subtype: discovery subsystem referral 00:12:56.802 treq: not required 00:12:56.802 portid: 0 00:12:56.802 trsvcid: 4430 00:12:56.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:56.802 traddr: 10.0.0.2 00:12:56.802 eflags: none 00:12:56.802 sectype: none 00:12:56.802 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:56.802 Perform nvmf subsystem discovery via RPC 00:12:56.802 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:56.802 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.802 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.802 [ 00:12:56.802 { 00:12:56.802 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:56.802 "subtype": "Discovery", 00:12:56.802 "listen_addresses": [ 00:12:56.802 { 00:12:56.802 "trtype": "TCP", 00:12:56.802 "adrfam": "IPv4", 00:12:56.802 "traddr": "10.0.0.2", 00:12:56.802 "trsvcid": "4420" 00:12:56.802 } 00:12:56.802 ], 00:12:56.802 "allow_any_host": true, 00:12:56.802 "hosts": [] 00:12:56.802 }, 00:12:56.802 { 00:12:56.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.802 "subtype": "NVMe", 00:12:56.802 "listen_addresses": [ 00:12:56.802 { 00:12:56.802 "trtype": "TCP", 00:12:56.802 "adrfam": "IPv4", 00:12:56.802 "traddr": "10.0.0.2", 00:12:56.802 "trsvcid": "4420" 00:12:56.802 } 00:12:56.802 ], 00:12:56.802 "allow_any_host": true, 00:12:56.802 "hosts": [], 00:12:56.802 "serial_number": "SPDK00000000000001", 00:12:56.802 "model_number": "SPDK bdev Controller", 00:12:56.802 "max_namespaces": 32, 00:12:56.802 "min_cntlid": 1, 00:12:56.803 "max_cntlid": 65519, 00:12:56.803 "namespaces": [ 00:12:56.803 { 00:12:56.803 "nsid": 1, 00:12:56.803 "bdev_name": "Null1", 00:12:56.803 "name": "Null1", 00:12:56.803 "nguid": "C9391F7947C04AF5B30772C29DC51734", 00:12:56.803 "uuid": "c9391f79-47c0-4af5-b307-72c29dc51734" 00:12:56.803 } 00:12:56.803 ] 00:12:56.803 }, 00:12:56.803 { 00:12:56.803 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:56.803 "subtype": "NVMe", 00:12:56.803 "listen_addresses": [ 00:12:56.803 { 00:12:56.803 "trtype": "TCP", 00:12:56.803 "adrfam": "IPv4", 00:12:56.803 "traddr": "10.0.0.2", 00:12:56.803 "trsvcid": "4420" 00:12:56.803 } 00:12:56.803 ], 00:12:56.803 "allow_any_host": true, 00:12:56.803 "hosts": [], 00:12:56.803 "serial_number": "SPDK00000000000002", 00:12:56.803 "model_number": "SPDK bdev Controller", 00:12:56.803 "max_namespaces": 32, 00:12:56.803 "min_cntlid": 1, 00:12:56.803 "max_cntlid": 65519, 00:12:56.803 "namespaces": [ 00:12:56.803 { 00:12:56.803 "nsid": 1, 00:12:56.803 "bdev_name": "Null2", 00:12:56.803 "name": "Null2", 00:12:56.803 "nguid": "8543B5649EF541E79DA84564F0611774", 
00:12:56.803 "uuid": "8543b564-9ef5-41e7-9da8-4564f0611774" 00:12:56.803 } 00:12:56.803 ] 00:12:56.803 }, 00:12:56.803 { 00:12:56.803 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:56.803 "subtype": "NVMe", 00:12:56.803 "listen_addresses": [ 00:12:56.803 { 00:12:56.803 "trtype": "TCP", 00:12:56.803 "adrfam": "IPv4", 00:12:56.803 "traddr": "10.0.0.2", 00:12:56.803 "trsvcid": "4420" 00:12:56.803 } 00:12:56.803 ], 00:12:56.803 "allow_any_host": true, 00:12:56.803 "hosts": [], 00:12:56.803 "serial_number": "SPDK00000000000003", 00:12:56.803 "model_number": "SPDK bdev Controller", 00:12:56.803 "max_namespaces": 32, 00:12:56.803 "min_cntlid": 1, 00:12:56.803 "max_cntlid": 65519, 00:12:56.803 "namespaces": [ 00:12:56.803 { 00:12:56.803 "nsid": 1, 00:12:56.803 "bdev_name": "Null3", 00:12:56.803 "name": "Null3", 00:12:56.803 "nguid": "E5CA06D6069E42078A99C69E80A591F1", 00:12:56.803 "uuid": "e5ca06d6-069e-4207-8a99-c69e80a591f1" 00:12:56.803 } 00:12:56.803 ] 00:12:56.803 }, 00:12:56.803 { 00:12:56.803 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:56.803 "subtype": "NVMe", 00:12:56.803 "listen_addresses": [ 00:12:56.803 { 00:12:56.803 "trtype": "TCP", 00:12:56.803 "adrfam": "IPv4", 00:12:56.803 "traddr": "10.0.0.2", 00:12:56.803 "trsvcid": "4420" 00:12:56.803 } 00:12:56.803 ], 00:12:56.803 "allow_any_host": true, 00:12:56.803 "hosts": [], 00:12:56.803 "serial_number": "SPDK00000000000004", 00:12:56.803 "model_number": "SPDK bdev Controller", 00:12:56.803 "max_namespaces": 32, 00:12:56.803 "min_cntlid": 1, 00:12:56.803 "max_cntlid": 65519, 00:12:56.803 "namespaces": [ 00:12:56.803 { 00:12:56.803 "nsid": 1, 00:12:56.803 "bdev_name": "Null4", 00:12:56.803 "name": "Null4", 00:12:56.803 "nguid": "1040843F4E28494282C419443C215ADC", 00:12:56.803 "uuid": "1040843f-4e28-4942-82c4-19443c215adc" 00:12:56.803 } 00:12:56.803 ] 00:12:56.803 } 00:12:56.803 ] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 
21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.064 rmmod nvme_tcp 00:12:57.064 rmmod nvme_fabrics 00:12:57.064 rmmod nvme_keyring 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1983859 ']' 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1983859 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1983859 ']' 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1983859 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983859 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983859' 00:12:57.064 killing process with pid 1983859 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1983859 00:12:57.064 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1983859 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.323 21:05:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.236 00:12:59.236 real 0m12.624s 00:12:59.236 user 0m9.117s 00:12:59.236 sys 0m6.816s 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.236 ************************************ 00:12:59.236 END TEST nvmf_target_discovery 00:12:59.236 ************************************ 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.236 21:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.498 ************************************ 00:12:59.498 START TEST nvmf_referrals 00:12:59.498 ************************************ 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:59.498 * Looking for test storage... 
00:12:59.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:59.498 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:59.499 21:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.499 
--rc genhtml_branch_coverage=1 00:12:59.499 --rc genhtml_function_coverage=1 00:12:59.499 --rc genhtml_legend=1 00:12:59.499 --rc geninfo_all_blocks=1 00:12:59.499 --rc geninfo_unexecuted_blocks=1 00:12:59.499 00:12:59.499 ' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.499 --rc genhtml_branch_coverage=1 00:12:59.499 --rc genhtml_function_coverage=1 00:12:59.499 --rc genhtml_legend=1 00:12:59.499 --rc geninfo_all_blocks=1 00:12:59.499 --rc geninfo_unexecuted_blocks=1 00:12:59.499 00:12:59.499 ' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.499 --rc genhtml_branch_coverage=1 00:12:59.499 --rc genhtml_function_coverage=1 00:12:59.499 --rc genhtml_legend=1 00:12:59.499 --rc geninfo_all_blocks=1 00:12:59.499 --rc geninfo_unexecuted_blocks=1 00:12:59.499 00:12:59.499 ' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.499 --rc genhtml_branch_coverage=1 00:12:59.499 --rc genhtml_function_coverage=1 00:12:59.499 --rc genhtml_legend=1 00:12:59.499 --rc geninfo_all_blocks=1 00:12:59.499 --rc geninfo_unexecuted_blocks=1 00:12:59.499 00:12:59.499 ' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.499 
21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.499 21:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:59.499 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.500 21:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.500 21:06:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.786 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:07.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:07.787 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:07.787 Found net devices under 0000:31:00.0: cvl_0_0 00:13:07.787 21:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:07.787 Found net devices under 0000:31:00.1: cvl_0_1 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
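The `gather_supported_nvmf_pci_devs` phase above builds per-family arrays (`e810`, `x722`, `mlx`) of PCI addresses from a `pci_bus_cache` associative array keyed by `vendor:device`, then scans `/sys/bus/pci/devices/<bdf>/net/` for the kernel net device names. A minimal illustrative sketch of that lookup pattern, with the cache contents hard-coded to the two E810 functions seen in this log rather than read from real hardware:

```shell
#!/usr/bin/env bash
# Illustrative pci_bus_cache lookup, mirroring nvmf/common.sh's pattern.
# The cache entries below are hypothetical stand-ins for sysfs discovery.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"   # Intel E810, as in the log
)
intel=0x8086
e810=()
# Absent keys expand to nothing; present keys word-split into addresses.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
    echo "Found $pci ($intel - 0x159b)"
done
```

With real hardware the addresses come from sysfs, and the net device name for each function is then read from `/sys/bus/pci/devices/$pci/net/*`, which is how the log arrives at `cvl_0_0` and `cvl_0_1`.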
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.787 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.788 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.788 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:13:08.048 00:13:08.048 --- 10.0.0.2 ping statistics --- 00:13:08.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.048 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:13:08.048 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:13:08.049 00:13:08.049 --- 10.0.0.1 ping statistics --- 00:13:08.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.049 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1989361 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1989361 00:13:08.049 
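The `nvmf_tcp_init` sequence above isolates the target-side port in a network namespace and verifies connectivity in both directions before starting the target. Collected into one place (a sketch only: it requires root, and the `cvl_0_0`/`cvl_0_1` interface names are the ice driver's names on this particular test bed):

```shell
# Target interface moves into its own namespace; initiator stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks, matching the two ping blocks in the log.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

After this, `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP`, which is why the log later launches `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`.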
21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1989361 ']' 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.049 21:06:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:08.310 [2024-12-05 21:06:09.515583] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:13:08.310 [2024-12-05 21:06:09.515670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.310 [2024-12-05 21:06:09.608319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.310 [2024-12-05 21:06:09.646088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.310 [2024-12-05 21:06:09.646123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:08.310 [2024-12-05 21:06:09.646131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.310 [2024-12-05 21:06:09.646138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.310 [2024-12-05 21:06:09.646144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.310 [2024-12-05 21:06:09.647756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.310 [2024-12-05 21:06:09.647894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.310 [2024-12-05 21:06:09.647996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.310 [2024-12-05 21:06:09.647997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.882 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.882 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:08.882 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 [2024-12-05 21:06:10.364019] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 [2024-12-05 21:06:10.390061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:09.142 21:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:09.142 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
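The RPC traffic in this phase, written out against SPDK's `scripts/rpc.py` (`rpc_cmd` in the log is a thin wrapper around it, talking to the `nvmf_tgt` started above on the default `/var/tmp/spdk.sock`); a sketch of the same sequence, not a verbatim extract of the test script:

```shell
# Create the TCP transport and a discovery listener on the target IP.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
# Register three referrals pointing at other discovery services.
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
# The test asserts exactly three referrals are now registered.
rpc.py nvmf_discovery_get_referrals | jq length    # log shows (( 3 == 3 ))
```

The later `nvmf_discovery_remove_referral` calls in the log undo these one by one and re-assert a length of 0.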
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:09.143 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.404 21:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.404 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:09.665 21:06:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.665 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:09.926 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
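The initiator-side half of `get_referral_ips` above checks the same state over the wire: it runs `nvme discover` against the discovery listener and filters out the self-referential "current discovery subsystem" record, leaving only the referred entries. A sketch (host NQN and host ID are per-machine values; `10.0.0.2:8009` is the listener created earlier):

```shell
nvme discover --hostnqn="$hostnqn" --hostid="$hostid" \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort
```

As the log shows, a referral added with `-n nqn.2016-06.io.spdk:cnode1` surfaces in the discovery log page with subtype `nvme subsystem`, while one added with `-n discovery` surfaces as a `discovery subsystem referral`, which is exactly what the two `get_discovery_entries` checks distinguish.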
subsystem referral")' 00:13:10.187 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:10.447 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:10.708 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:10.708 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:10.708 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:10.708 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.708 21:06:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:10.708 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:10.708 21:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:10.708 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:10.708 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:10.708 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.708 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.969 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:10.970 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:10.970 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:10.970 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:10.970 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:10.970 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:10.970 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.232 rmmod nvme_tcp 00:13:11.232 rmmod nvme_fabrics 00:13:11.232 rmmod nvme_keyring 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1989361 ']' 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1989361 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1989361 ']' 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1989361 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1989361 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1989361' 00:13:11.232 killing process with pid 1989361 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1989361 00:13:11.232 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1989361 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.494 21:06:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.038 00:13:14.038 real 0m14.197s 00:13:14.038 user 0m15.971s 00:13:14.038 sys 0m7.207s 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 
************************************ 00:13:14.038 END TEST nvmf_referrals 00:13:14.038 ************************************ 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 ************************************ 00:13:14.038 START TEST nvmf_connect_disconnect 00:13:14.038 ************************************ 00:13:14.038 21:06:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:14.038 * Looking for test storage... 
00:13:14.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.038 --rc genhtml_branch_coverage=1 00:13:14.038 --rc genhtml_function_coverage=1 00:13:14.038 --rc genhtml_legend=1 00:13:14.038 --rc geninfo_all_blocks=1 00:13:14.038 --rc geninfo_unexecuted_blocks=1 00:13:14.038 00:13:14.038 ' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.038 --rc genhtml_branch_coverage=1 00:13:14.038 --rc genhtml_function_coverage=1 00:13:14.038 --rc genhtml_legend=1 00:13:14.038 --rc geninfo_all_blocks=1 00:13:14.038 --rc geninfo_unexecuted_blocks=1 00:13:14.038 00:13:14.038 ' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.038 --rc genhtml_branch_coverage=1 00:13:14.038 --rc genhtml_function_coverage=1 00:13:14.038 --rc genhtml_legend=1 00:13:14.038 --rc geninfo_all_blocks=1 00:13:14.038 --rc geninfo_unexecuted_blocks=1 00:13:14.038 00:13:14.038 ' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.038 --rc genhtml_branch_coverage=1 00:13:14.038 --rc genhtml_function_coverage=1 00:13:14.038 --rc genhtml_legend=1 00:13:14.038 --rc geninfo_all_blocks=1 00:13:14.038 --rc geninfo_unexecuted_blocks=1 00:13:14.038 00:13:14.038 ' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.038 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.039 21:06:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.180 21:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.180 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.181 21:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:22.181 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:22.181 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.181 21:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:22.181 Found net devices under 0000:31:00.0: cvl_0_0 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.181 21:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:22.181 Found net devices under 0000:31:00.1: cvl_0_1 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.181 21:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:13:22.181 00:13:22.181 --- 10.0.0.2 ping statistics --- 00:13:22.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.181 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:13:22.181 00:13:22.181 --- 10.0.0.1 ping statistics --- 00:13:22.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.181 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.181 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1994964 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1994964 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1994964 ']' 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.182 21:06:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:22.484 [2024-12-05 21:06:23.649832] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:13:22.484 [2024-12-05 21:06:23.649909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.484 [2024-12-05 21:06:23.740896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.484 [2024-12-05 21:06:23.782451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:22.484 [2024-12-05 21:06:23.782490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.484 [2024-12-05 21:06:23.782498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.484 [2024-12-05 21:06:23.782505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.484 [2024-12-05 21:06:23.782510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.484 [2024-12-05 21:06:23.784273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.484 [2024-12-05 21:06:23.784393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.484 [2024-12-05 21:06:23.784549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.484 [2024-12-05 21:06:23.784550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:23.055 21:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.055 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.315 [2024-12-05 21:06:24.492309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.315 21:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.315 [2024-12-05 21:06:24.562315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.315 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:23.316 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:23.316 21:06:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:27.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:41.616 21:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.616 rmmod nvme_tcp 00:13:41.616 rmmod nvme_fabrics 00:13:41.616 rmmod nvme_keyring 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1994964 ']' 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1994964 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1994964 ']' 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1994964 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.616 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994964 
00:13:41.617 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.617 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.617 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994964' 00:13:41.617 killing process with pid 1994964 00:13:41.617 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1994964 00:13:41.617 21:06:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1994964 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.617 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.877 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.877 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:41.877 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.877 21:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.877 21:06:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:43.787 00:13:43.787 real 0m30.176s 00:13:43.787 user 1m19.005s 00:13:43.787 sys 0m7.807s 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.787 ************************************ 00:13:43.787 END TEST nvmf_connect_disconnect 00:13:43.787 ************************************ 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.787 ************************************ 00:13:43.787 START TEST nvmf_multitarget 00:13:43.787 ************************************ 00:13:43.787 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:44.047 * Looking for test storage... 
00:13:44.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.047 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:44.048 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.048 --rc genhtml_branch_coverage=1 00:13:44.048 --rc genhtml_function_coverage=1 00:13:44.048 --rc genhtml_legend=1 00:13:44.048 --rc geninfo_all_blocks=1 00:13:44.048 --rc geninfo_unexecuted_blocks=1 00:13:44.048 00:13:44.048 ' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.048 --rc genhtml_branch_coverage=1 00:13:44.048 --rc genhtml_function_coverage=1 00:13:44.048 --rc genhtml_legend=1 00:13:44.048 --rc geninfo_all_blocks=1 00:13:44.048 --rc geninfo_unexecuted_blocks=1 00:13:44.048 00:13:44.048 ' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.048 --rc genhtml_branch_coverage=1 00:13:44.048 --rc genhtml_function_coverage=1 00:13:44.048 --rc genhtml_legend=1 00:13:44.048 --rc geninfo_all_blocks=1 00:13:44.048 --rc geninfo_unexecuted_blocks=1 00:13:44.048 00:13:44.048 ' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.048 --rc genhtml_branch_coverage=1 00:13:44.048 --rc genhtml_function_coverage=1 00:13:44.048 --rc genhtml_legend=1 00:13:44.048 --rc geninfo_all_blocks=1 00:13:44.048 --rc geninfo_unexecuted_blocks=1 00:13:44.048 00:13:44.048 ' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.048 21:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.048 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.049 21:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.049 21:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:52.183 21:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.183 21:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:52.183 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.183 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:52.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.184 21:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:52.184 Found net devices under 0000:31:00.0: cvl_0_0 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.184 
21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:52.184 Found net devices under 0000:31:00.1: cvl_0_1 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.184 21:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.184 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:13:52.447 00:13:52.447 --- 10.0.0.2 ping statistics --- 00:13:52.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.447 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
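The namespace plumbing performed by `nvmf_tcp_init` in this trace amounts to: create a network namespace, move one port of the NIC pair into it, address both ends on 10.0.0.0/24, bring the links up, open TCP port 4420 in iptables, and ping in both directions. Because every step needs CAP_NET_ADMIN, the sketch below only assembles and prints the sequence reconstructed from the log (device, namespace, and address names are taken verbatim from the trace):

```shell
# Reconstruction of the setup sequence this trace executes; printed,
# not run, because each command requires root privileges.
NS=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $NS"
  "ip link set cvl_0_0 netns $NS"                          # target port moves into the namespace
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                    # initiator IP stays on the host
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"  # target IP inside the namespace
  "ip link set cvl_0_1 up"
  "ip netns exec $NS ip link set cvl_0_0 up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"                                     # host -> namespace
  "ip netns exec $NS ping -c 1 10.0.0.1"                   # namespace -> host
)
printf '%s\n' "${cmds[@]}"
```

Splitting the two ports of one physical NIC across a namespace boundary is what lets a single machine act as both NVMe-oF target and initiator over real hardware.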
00:13:52.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:13:52.447 00:13:52.447 --- 10.0.0.1 ping statistics --- 00:13:52.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.447 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2003479 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2003479 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2003479 ']' 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.447 21:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:52.709 [2024-12-05 21:06:53.887613] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:13:52.710 [2024-12-05 21:06:53.887703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.710 [2024-12-05 21:06:53.983509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.710 [2024-12-05 21:06:54.025034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.710 [2024-12-05 21:06:54.025071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
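`waitforlisten` above blocks until the freshly started `nvmf_tgt` is up and accepting RPCs on /var/tmp/spdk.sock, bounded by `max_retries=100` per the trace. A minimal sketch of that poll-with-retries shape, demonstrated on a plain file so it runs unprivileged; the function name and the stand-in path are illustrative, not the harness's actual helper:

```shell
# Poll until a path appears, bounded by a retry count -- the shape of
# the harness's waitforlisten, which watches for the RPC UNIX socket.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}

sock_stand_in=$(mktemp)              # stands in for /var/tmp/spdk.sock
wait_for_path "$sock_stand_in" 10 && echo "listening"
rm -f "$sock_stand_in"
```

The real helper additionally probes the socket with an RPC call rather than just checking existence, since the file can exist before the app is ready.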
00:13:52.710 [2024-12-05 21:06:54.025079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.710 [2024-12-05 21:06:54.025086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.710 [2024-12-05 21:06:54.025092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.710 [2024-12-05 21:06:54.026720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.710 [2024-12-05 21:06:54.026857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.710 [2024-12-05 21:06:54.027026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.710 [2024-12-05 21:06:54.027027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.282 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.282 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:53.282 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.282 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.282 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:53.543 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.543 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:53.543 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:53.543 21:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:53.543 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:53.543 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:53.543 "nvmf_tgt_1" 00:13:53.543 21:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:53.804 "nvmf_tgt_2" 00:13:53.804 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:53.804 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:53.804 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:53.804 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:54.064 true 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:54.064 true 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:54.064 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.065 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.065 rmmod nvme_tcp 00:13:54.325 rmmod nvme_fabrics 00:13:54.325 rmmod nvme_keyring 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2003479 ']' 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2003479 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2003479 ']' 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2003479 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2003479 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2003479' 00:13:54.325 killing process with pid 2003479 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2003479 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2003479 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.325 21:06:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.869 00:13:56.869 real 0m12.614s 00:13:56.869 user 0m10.059s 00:13:56.869 sys 0m6.758s 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 ************************************ 00:13:56.869 END TEST nvmf_multitarget 00:13:56.869 ************************************ 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.869 ************************************ 00:13:56.869 START TEST nvmf_rpc 00:13:56.869 ************************************ 00:13:56.869 21:06:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:56.869 * Looking for test storage... 
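Each suite in this log is launched through `run_test`, which prints the START TEST / END TEST banners seen above, executes the test script, and reports real/user/sys timing. A simplified sketch of that banner-and-status wrapper (the real helper in autotest_common.sh also validates arguments and records timing, which is omitted here):

```shell
# Minimal run_test-style wrapper: banner, run the command, banner,
# propagate the command's exit status to the caller.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_suite true && echo "demo_suite passed"
```

Propagating the inner exit status is what lets the outer pipeline fail the build when any one suite fails while still emitting its closing banner.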
00:13:56.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.869 21:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.869 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:56.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.869 --rc genhtml_branch_coverage=1 00:13:56.870 --rc genhtml_function_coverage=1 00:13:56.870 --rc genhtml_legend=1 00:13:56.870 --rc geninfo_all_blocks=1 00:13:56.870 --rc geninfo_unexecuted_blocks=1 
00:13:56.870 00:13:56.870 ' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.870 --rc genhtml_branch_coverage=1 00:13:56.870 --rc genhtml_function_coverage=1 00:13:56.870 --rc genhtml_legend=1 00:13:56.870 --rc geninfo_all_blocks=1 00:13:56.870 --rc geninfo_unexecuted_blocks=1 00:13:56.870 00:13:56.870 ' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.870 --rc genhtml_branch_coverage=1 00:13:56.870 --rc genhtml_function_coverage=1 00:13:56.870 --rc genhtml_legend=1 00:13:56.870 --rc geninfo_all_blocks=1 00:13:56.870 --rc geninfo_unexecuted_blocks=1 00:13:56.870 00:13:56.870 ' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:56.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.870 --rc genhtml_branch_coverage=1 00:13:56.870 --rc genhtml_function_coverage=1 00:13:56.870 --rc genhtml_legend=1 00:13:56.870 --rc geninfo_all_blocks=1 00:13:56.870 --rc geninfo_unexecuted_blocks=1 00:13:56.870 00:13:56.870 ' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.870 21:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
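The `paths/export.sh` trace above shows `/opt/golangci`, `/opt/protoc` and `/opt/go` prepended to `PATH` many times over, because the script prepends unconditionally on every sourcing. A minimal sketch (our own helper, not part of SPDK) of the usual idempotent-prepend fix:

```shell
# Prepend DIR to PATH only if it is not already a component, avoiding the
# duplicated /opt/... prefixes visible in the export.sh trace above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already present: leave PATH unchanged
        *) PATH="$1:$PATH" ;;
    esac
}
```

Calling it twice with the same directory leaves `PATH` with a single copy of that entry.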
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.870 21:06:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.870 21:06:58 
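The captured error `[: : integer expression expected` at `nvmf/common.sh` line 33 comes from evaluating `'[' '' -eq 1 ']'`: an empty string is handed to the numeric `-eq` operator. A hedged sketch of the common guard (function and variable names here are illustrative, not SPDK's):

```shell
# Default an empty/unset flag to 0 before a numeric comparison, so
# '[' '' -eq 1 ']' can never occur.
check_no_huge() {
    local flag=${1:-0}             # ':-' also covers an empty-string argument
    if [ "$flag" -eq 1 ]; then
        echo disabled
    else
        echo enabled
    fi
}
```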
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.017 
21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:14:05.017 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:05.017 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:05.017 Found net devices under 0000:31:00.0: cvl_0_0 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:05.017 Found net devices under 0000:31:00.1: cvl_0_1 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.017 21:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.017 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:05.018 
21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:05.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:14:05.018 00:14:05.018 --- 10.0.0.2 ping statistics --- 00:14:05.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.018 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:14:05.018 00:14:05.018 --- 10.0.0.1 ping statistics --- 00:14:05.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.018 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:05.018 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2008737 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2008737 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
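The `nvmf_tcp_init` section above moves the target NIC (`cvl_0_0`) into a fresh namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, and verifies reachability with `ping` in each direction. A sketch of that wiring, condensed from the logged commands; the helper only prints the commands so it can be inspected without root privileges:

```shell
# Emit the namespace-wiring steps traced in the log for a given
# target/initiator interface pair (addresses taken from the log).
nvmf_tcp_init_cmds() {
    local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
    cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}
```

With the namespace in place, the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the log.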
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2008737 ']' 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.280 21:07:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.280 [2024-12-05 21:07:06.525903] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:14:05.280 [2024-12-05 21:07:06.525973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.280 [2024-12-05 21:07:06.617099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.280 [2024-12-05 21:07:06.659463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.280 [2024-12-05 21:07:06.659501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:05.280 [2024-12-05 21:07:06.659509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.280 [2024-12-05 21:07:06.659516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.280 [2024-12-05 21:07:06.659523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.280 [2024-12-05 21:07:06.661407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.280 [2024-12-05 21:07:06.661541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.280 [2024-12-05 21:07:06.661700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.280 [2024-12-05 21:07:06.661701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.224 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.224 21:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:06.224 "tick_rate": 2400000000, 00:14:06.224 "poll_groups": [ 00:14:06.224 { 00:14:06.224 "name": "nvmf_tgt_poll_group_000", 00:14:06.224 "admin_qpairs": 0, 00:14:06.224 "io_qpairs": 0, 00:14:06.224 "current_admin_qpairs": 0, 00:14:06.224 "current_io_qpairs": 0, 00:14:06.224 "pending_bdev_io": 0, 00:14:06.224 "completed_nvme_io": 0, 00:14:06.224 "transports": [] 00:14:06.224 }, 00:14:06.224 { 00:14:06.224 "name": "nvmf_tgt_poll_group_001", 00:14:06.224 "admin_qpairs": 0, 00:14:06.224 "io_qpairs": 0, 00:14:06.224 "current_admin_qpairs": 0, 00:14:06.224 "current_io_qpairs": 0, 00:14:06.224 "pending_bdev_io": 0, 00:14:06.224 "completed_nvme_io": 0, 00:14:06.224 "transports": [] 00:14:06.225 }, 00:14:06.225 { 00:14:06.225 "name": "nvmf_tgt_poll_group_002", 00:14:06.225 "admin_qpairs": 0, 00:14:06.225 "io_qpairs": 0, 00:14:06.225 "current_admin_qpairs": 0, 00:14:06.225 "current_io_qpairs": 0, 00:14:06.225 "pending_bdev_io": 0, 00:14:06.225 "completed_nvme_io": 0, 00:14:06.225 "transports": [] 00:14:06.225 }, 00:14:06.225 { 00:14:06.225 "name": "nvmf_tgt_poll_group_003", 00:14:06.225 "admin_qpairs": 0, 00:14:06.225 "io_qpairs": 0, 00:14:06.225 "current_admin_qpairs": 0, 00:14:06.225 "current_io_qpairs": 0, 00:14:06.225 "pending_bdev_io": 0, 00:14:06.225 "completed_nvme_io": 0, 00:14:06.225 "transports": [] 00:14:06.225 } 00:14:06.225 ] 00:14:06.225 }' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:06.225 21:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.225 [2024-12-05 21:07:07.498039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:06.225 "tick_rate": 2400000000, 00:14:06.225 "poll_groups": [ 00:14:06.225 { 00:14:06.225 "name": "nvmf_tgt_poll_group_000", 00:14:06.225 "admin_qpairs": 0, 00:14:06.225 "io_qpairs": 0, 00:14:06.225 "current_admin_qpairs": 0, 00:14:06.225 "current_io_qpairs": 0, 00:14:06.225 "pending_bdev_io": 0, 00:14:06.225 "completed_nvme_io": 0, 00:14:06.225 "transports": [ 00:14:06.225 { 00:14:06.225 "trtype": "TCP" 00:14:06.225 } 00:14:06.225 ] 00:14:06.225 }, 00:14:06.225 { 00:14:06.225 "name": "nvmf_tgt_poll_group_001", 00:14:06.225 "admin_qpairs": 0, 00:14:06.225 "io_qpairs": 0, 00:14:06.225 "current_admin_qpairs": 0, 00:14:06.225 "current_io_qpairs": 0, 00:14:06.225 "pending_bdev_io": 0, 00:14:06.225 
"completed_nvme_io": 0, 00:14:06.225 "transports": [ 00:14:06.225 { 00:14:06.225 "trtype": "TCP" 00:14:06.225 } 00:14:06.225 ] 00:14:06.225 }, 00:14:06.225 { 00:14:06.225 "name": "nvmf_tgt_poll_group_002", 00:14:06.225 "admin_qpairs": 0, 00:14:06.225 "io_qpairs": 0, 00:14:06.225 "current_admin_qpairs": 0, 00:14:06.225 "current_io_qpairs": 0, 00:14:06.225 "pending_bdev_io": 0, 00:14:06.225 "completed_nvme_io": 0, 00:14:06.225 "transports": [ 00:14:06.225 { 00:14:06.225 "trtype": "TCP" 00:14:06.225 } 00:14:06.225 ] 00:14:06.225 }, 00:14:06.225 { 00:14:06.225 "name": "nvmf_tgt_poll_group_003", 00:14:06.225 "admin_qpairs": 0, 00:14:06.225 "io_qpairs": 0, 00:14:06.225 "current_admin_qpairs": 0, 00:14:06.225 "current_io_qpairs": 0, 00:14:06.225 "pending_bdev_io": 0, 00:14:06.225 "completed_nvme_io": 0, 00:14:06.225 "transports": [ 00:14:06.225 { 00:14:06.225 "trtype": "TCP" 00:14:06.225 } 00:14:06.225 ] 00:14:06.225 } 00:14:06.225 ] 00:14:06.225 }' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:06.225 
21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.225 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.486 Malloc1 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:06.486 21:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.486 [2024-12-05 21:07:07.711723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:14:06.486 [2024-12-05 21:07:07.748484] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:14:06.486 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:06.486 could not add new controller: failed to write to nvme-fabrics device 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.486 21:07:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.397 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.397 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:08.397 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.397 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:08.397 21:07:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.314 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:10.315 21:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.315 [2024-12-05 21:07:11.565596] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:14:10.315 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:10.315 could not add new controller: failed to write to nvme-fabrics device 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:10.315 
21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.315 21:07:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.769 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.769 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:11.769 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.769 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:11.769 21:07:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:13.688 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.948 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:13.948 21:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.949 [2024-12-05 21:07:15.302814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.949 21:07:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.861 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:15.861 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:15.861 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.861 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:15.861 21:07:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 21:07:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.774 
21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 [2024-12-05 21:07:19.020271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:07:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.686 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.686 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.686 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.686 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:19.686 21:07:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.599 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 21:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 [2024-12-05 21:07:22.789283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.600 21:07:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:22.986 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.986 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.986 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.986 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:22.986 21:07:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:24.897 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.158 [2024-12-05 21:07:26.512507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.158 21:07:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.068 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.068 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:27.068 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:27.069 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:27.069 21:07:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.980 [2024-12-05 21:07:30.234577] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.980 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.981 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.981 21:07:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.893 21:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:30.893 21:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:30.893 21:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.893 21:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:30.893 21:07:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.808 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:32.809 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.809 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.809 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 [2024-12-05 21:07:34.011565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 [2024-12-05 21:07:34.079715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 
21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:14:32.809 [2024-12-05 21:07:34.147949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.809 [2024-12-05 21:07:34.216198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.809 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.810 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.071 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 [2024-12-05 21:07:34.280395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:33.072 "tick_rate": 2400000000, 00:14:33.072 "poll_groups": [ 00:14:33.072 { 00:14:33.072 "name": "nvmf_tgt_poll_group_000", 00:14:33.072 "admin_qpairs": 0, 00:14:33.072 "io_qpairs": 224, 00:14:33.072 "current_admin_qpairs": 0, 00:14:33.072 "current_io_qpairs": 0, 00:14:33.072 "pending_bdev_io": 0, 00:14:33.072 "completed_nvme_io": 273, 00:14:33.072 "transports": [ 00:14:33.072 { 00:14:33.072 "trtype": "TCP" 00:14:33.072 } 00:14:33.072 ] 00:14:33.072 }, 00:14:33.072 { 00:14:33.072 "name": "nvmf_tgt_poll_group_001", 00:14:33.072 "admin_qpairs": 1, 00:14:33.072 "io_qpairs": 223, 00:14:33.072 "current_admin_qpairs": 0, 00:14:33.072 "current_io_qpairs": 0, 00:14:33.072 "pending_bdev_io": 0, 00:14:33.072 "completed_nvme_io": 452, 00:14:33.072 "transports": [ 00:14:33.072 { 00:14:33.072 "trtype": "TCP" 00:14:33.072 } 00:14:33.072 ] 00:14:33.072 }, 00:14:33.072 { 00:14:33.072 "name": "nvmf_tgt_poll_group_002", 00:14:33.072 "admin_qpairs": 6, 00:14:33.072 "io_qpairs": 218, 00:14:33.072 "current_admin_qpairs": 0, 00:14:33.072 "current_io_qpairs": 0, 00:14:33.072 "pending_bdev_io": 0, 
00:14:33.072 "completed_nvme_io": 219, 00:14:33.072 "transports": [ 00:14:33.072 { 00:14:33.072 "trtype": "TCP" 00:14:33.072 } 00:14:33.072 ] 00:14:33.072 }, 00:14:33.072 { 00:14:33.072 "name": "nvmf_tgt_poll_group_003", 00:14:33.072 "admin_qpairs": 0, 00:14:33.072 "io_qpairs": 224, 00:14:33.072 "current_admin_qpairs": 0, 00:14:33.072 "current_io_qpairs": 0, 00:14:33.072 "pending_bdev_io": 0, 00:14:33.072 "completed_nvme_io": 295, 00:14:33.072 "transports": [ 00:14:33.072 { 00:14:33.072 "trtype": "TCP" 00:14:33.072 } 00:14:33.072 ] 00:14:33.072 } 00:14:33.072 ] 00:14:33.072 }' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.072 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.072 rmmod nvme_tcp 00:14:33.072 rmmod nvme_fabrics 00:14:33.072 rmmod nvme_keyring 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2008737 ']' 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2008737 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2008737 ']' 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2008737 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008737 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008737' 00:14:33.333 killing process with pid 2008737 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2008737 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2008737 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.333 21:07:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:35.878 00:14:35.878 real 0m38.907s 00:14:35.878 user 1m54.374s 00:14:35.878 sys 0m8.522s 00:14:35.878 21:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.878 ************************************ 00:14:35.878 END TEST nvmf_rpc 00:14:35.878 ************************************ 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.878 ************************************ 00:14:35.878 START TEST nvmf_invalid 00:14:35.878 ************************************ 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:35.878 * Looking for test storage... 
00:14:35.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:35.878 21:07:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:35.878 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:35.878 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.878 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.879 --rc genhtml_branch_coverage=1 00:14:35.879 --rc 
genhtml_function_coverage=1 00:14:35.879 --rc genhtml_legend=1 00:14:35.879 --rc geninfo_all_blocks=1 00:14:35.879 --rc geninfo_unexecuted_blocks=1 00:14:35.879 00:14:35.879 ' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.879 --rc genhtml_branch_coverage=1 00:14:35.879 --rc genhtml_function_coverage=1 00:14:35.879 --rc genhtml_legend=1 00:14:35.879 --rc geninfo_all_blocks=1 00:14:35.879 --rc geninfo_unexecuted_blocks=1 00:14:35.879 00:14:35.879 ' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.879 --rc genhtml_branch_coverage=1 00:14:35.879 --rc genhtml_function_coverage=1 00:14:35.879 --rc genhtml_legend=1 00:14:35.879 --rc geninfo_all_blocks=1 00:14:35.879 --rc geninfo_unexecuted_blocks=1 00:14:35.879 00:14:35.879 ' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:35.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.879 --rc genhtml_branch_coverage=1 00:14:35.879 --rc genhtml_function_coverage=1 00:14:35.879 --rc genhtml_legend=1 00:14:35.879 --rc geninfo_all_blocks=1 00:14:35.879 --rc geninfo_unexecuted_blocks=1 00:14:35.879 00:14:35.879 ' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.879 21:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.879 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.880 21:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.880 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.880 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.880 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:35.880 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:35.880 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:35.880 21:07:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:44.015 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:44.015 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.016 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:44.016 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:44.016 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:44.016 Found net devices under 0000:31:00.0: cvl_0_0 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:44.016 Found net devices under 0000:31:00.1: cvl_0_1 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.016 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.016 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.275 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.275 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.275 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:44.275 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.275 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.276 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:44.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:14:44.276 00:14:44.276 --- 10.0.0.2 ping statistics --- 00:14:44.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.276 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:14:44.276 00:14:44.276 --- 10.0.0.1 ping statistics --- 00:14:44.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.276 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.276 21:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2019065 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2019065 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2019065 ']' 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.276 21:07:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:44.276 [2024-12-05 21:07:45.705385] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:14:44.276 [2024-12-05 21:07:45.705437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.535 [2024-12-05 21:07:45.788668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.535 [2024-12-05 21:07:45.824538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.535 [2024-12-05 21:07:45.824568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.535 [2024-12-05 21:07:45.824576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.535 [2024-12-05 21:07:45.824583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.535 [2024-12-05 21:07:45.824589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:44.535 [2024-12-05 21:07:45.826133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.535 [2024-12-05 21:07:45.826252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.535 [2024-12-05 21:07:45.826411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.535 [2024-12-05 21:07:45.826411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.104 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.104 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:45.104 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.104 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.104 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27617 00:14:45.364 [2024-12-05 21:07:46.714073] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:45.364 { 00:14:45.364 "nqn": "nqn.2016-06.io.spdk:cnode27617", 00:14:45.364 "tgt_name": "foobar", 00:14:45.364 "method": "nvmf_create_subsystem", 00:14:45.364 "req_id": 1 00:14:45.364 } 00:14:45.364 Got JSON-RPC error 
response 00:14:45.364 response: 00:14:45.364 { 00:14:45.364 "code": -32603, 00:14:45.364 "message": "Unable to find target foobar" 00:14:45.364 }' 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:45.364 { 00:14:45.364 "nqn": "nqn.2016-06.io.spdk:cnode27617", 00:14:45.364 "tgt_name": "foobar", 00:14:45.364 "method": "nvmf_create_subsystem", 00:14:45.364 "req_id": 1 00:14:45.364 } 00:14:45.364 Got JSON-RPC error response 00:14:45.364 response: 00:14:45.364 { 00:14:45.364 "code": -32603, 00:14:45.364 "message": "Unable to find target foobar" 00:14:45.364 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:45.364 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31831 00:14:45.624 [2024-12-05 21:07:46.902734] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31831: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:45.624 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:45.624 { 00:14:45.624 "nqn": "nqn.2016-06.io.spdk:cnode31831", 00:14:45.624 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:45.624 "method": "nvmf_create_subsystem", 00:14:45.624 "req_id": 1 00:14:45.624 } 00:14:45.624 Got JSON-RPC error response 00:14:45.624 response: 00:14:45.624 { 00:14:45.624 "code": -32602, 00:14:45.624 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:45.624 }' 00:14:45.624 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:45.624 { 00:14:45.624 "nqn": "nqn.2016-06.io.spdk:cnode31831", 00:14:45.624 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:45.624 "method": "nvmf_create_subsystem", 
00:14:45.624 "req_id": 1 00:14:45.624 } 00:14:45.624 Got JSON-RPC error response 00:14:45.624 response: 00:14:45.624 { 00:14:45.624 "code": -32602, 00:14:45.624 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:45.624 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:45.624 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:45.624 21:07:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18311 00:14:45.886 [2024-12-05 21:07:47.091282] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18311: invalid model number 'SPDK_Controller' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:45.886 { 00:14:45.886 "nqn": "nqn.2016-06.io.spdk:cnode18311", 00:14:45.886 "model_number": "SPDK_Controller\u001f", 00:14:45.886 "method": "nvmf_create_subsystem", 00:14:45.886 "req_id": 1 00:14:45.886 } 00:14:45.886 Got JSON-RPC error response 00:14:45.886 response: 00:14:45.886 { 00:14:45.886 "code": -32602, 00:14:45.886 "message": "Invalid MN SPDK_Controller\u001f" 00:14:45.886 }' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:45.886 { 00:14:45.886 "nqn": "nqn.2016-06.io.spdk:cnode18311", 00:14:45.886 "model_number": "SPDK_Controller\u001f", 00:14:45.886 "method": "nvmf_create_subsystem", 00:14:45.886 "req_id": 1 00:14:45.886 } 00:14:45.886 Got JSON-RPC error response 00:14:45.886 response: 00:14:45.886 { 00:14:45.886 "code": -32602, 00:14:45.886 "message": "Invalid MN SPDK_Controller\u001f" 00:14:45.886 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.886 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:45.886 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:45.887 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 
00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:45.887 
21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^Qw`'\'':T!nKo40tHCVk@_!' 00:14:45.887 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^Qw`'\'':T!nKo40tHCVk@_!' nqn.2016-06.io.spdk:cnode6383 00:14:46.149 [2024-12-05 21:07:47.444399] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6383: invalid serial number '^Qw`':T!nKo40tHCVk@_!' 
00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:46.149 { 00:14:46.149 "nqn": "nqn.2016-06.io.spdk:cnode6383", 00:14:46.149 "serial_number": "^Qw`'\'':T!nKo40tHCVk@_!", 00:14:46.149 "method": "nvmf_create_subsystem", 00:14:46.149 "req_id": 1 00:14:46.149 } 00:14:46.149 Got JSON-RPC error response 00:14:46.149 response: 00:14:46.149 { 00:14:46.149 "code": -32602, 00:14:46.149 "message": "Invalid SN ^Qw`'\'':T!nKo40tHCVk@_!" 00:14:46.149 }' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:46.149 { 00:14:46.149 "nqn": "nqn.2016-06.io.spdk:cnode6383", 00:14:46.149 "serial_number": "^Qw`':T!nKo40tHCVk@_!", 00:14:46.149 "method": "nvmf_create_subsystem", 00:14:46.149 "req_id": 1 00:14:46.149 } 00:14:46.149 Got JSON-RPC error response 00:14:46.149 response: 00:14:46.149 { 00:14:46.149 "code": -32602, 00:14:46.149 "message": "Invalid SN ^Qw`':T!nKo40tHCVk@_!" 00:14:46.149 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:46.149 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:46.149 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:46.149 
21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.149 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:46.411 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:46.411 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:46.411 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:46.411 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:46.411 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ } == \- ]] 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '}nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=' 00:14:46.412 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '}nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=' nqn.2016-06.io.spdk:cnode24690 00:14:46.672 [2024-12-05 21:07:47.954063] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24690: invalid model number '}nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=' 00:14:46.672 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:46.672 { 00:14:46.672 "nqn": "nqn.2016-06.io.spdk:cnode24690", 00:14:46.672 "model_number": "}nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=", 00:14:46.672 "method": "nvmf_create_subsystem", 00:14:46.672 "req_id": 1 00:14:46.672 } 00:14:46.672 Got JSON-RPC error response 00:14:46.672 response: 00:14:46.672 { 00:14:46.672 "code": -32602, 00:14:46.672 "message": "Invalid MN }nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=" 00:14:46.672 }' 00:14:46.672 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:46.672 { 00:14:46.672 "nqn": 
"nqn.2016-06.io.spdk:cnode24690", 00:14:46.672 "model_number": "}nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=", 00:14:46.672 "method": "nvmf_create_subsystem", 00:14:46.672 "req_id": 1 00:14:46.672 } 00:14:46.672 Got JSON-RPC error response 00:14:46.672 response: 00:14:46.672 { 00:14:46.672 "code": -32602, 00:14:46.672 "message": "Invalid MN }nRTO!:OXA[xj71Kii)`o@A[_M;1gE#:2hLP(JE4=" 00:14:46.672 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:46.672 21:07:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:46.932 [2024-12-05 21:07:48.138747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.932 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:46.932 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:46.932 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:46.932 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:46.932 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:47.192 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:47.192 [2024-12-05 21:07:48.523966] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:47.192 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:47.192 { 00:14:47.192 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:47.192 "listen_address": { 00:14:47.192 "trtype": "tcp", 00:14:47.192 "traddr": "", 00:14:47.192 "trsvcid": "4421" 
00:14:47.192 }, 00:14:47.192 "method": "nvmf_subsystem_remove_listener", 00:14:47.192 "req_id": 1 00:14:47.192 } 00:14:47.192 Got JSON-RPC error response 00:14:47.192 response: 00:14:47.192 { 00:14:47.192 "code": -32602, 00:14:47.192 "message": "Invalid parameters" 00:14:47.192 }' 00:14:47.192 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:47.192 { 00:14:47.192 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:47.192 "listen_address": { 00:14:47.192 "trtype": "tcp", 00:14:47.192 "traddr": "", 00:14:47.192 "trsvcid": "4421" 00:14:47.192 }, 00:14:47.192 "method": "nvmf_subsystem_remove_listener", 00:14:47.192 "req_id": 1 00:14:47.192 } 00:14:47.192 Got JSON-RPC error response 00:14:47.192 response: 00:14:47.192 { 00:14:47.192 "code": -32602, 00:14:47.192 "message": "Invalid parameters" 00:14:47.192 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:47.192 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16896 -i 0 00:14:47.452 [2024-12-05 21:07:48.712526] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16896: invalid cntlid range [0-65519] 00:14:47.452 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:47.452 { 00:14:47.452 "nqn": "nqn.2016-06.io.spdk:cnode16896", 00:14:47.452 "min_cntlid": 0, 00:14:47.452 "method": "nvmf_create_subsystem", 00:14:47.452 "req_id": 1 00:14:47.452 } 00:14:47.452 Got JSON-RPC error response 00:14:47.452 response: 00:14:47.452 { 00:14:47.452 "code": -32602, 00:14:47.452 "message": "Invalid cntlid range [0-65519]" 00:14:47.452 }' 00:14:47.452 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:47.452 { 00:14:47.452 "nqn": "nqn.2016-06.io.spdk:cnode16896", 00:14:47.452 "min_cntlid": 0, 00:14:47.452 "method": 
"nvmf_create_subsystem", 00:14:47.452 "req_id": 1 00:14:47.452 } 00:14:47.452 Got JSON-RPC error response 00:14:47.452 response: 00:14:47.452 { 00:14:47.452 "code": -32602, 00:14:47.452 "message": "Invalid cntlid range [0-65519]" 00:14:47.452 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:47.452 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28459 -i 65520 00:14:47.711 [2024-12-05 21:07:48.893132] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28459: invalid cntlid range [65520-65519] 00:14:47.711 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:47.711 { 00:14:47.711 "nqn": "nqn.2016-06.io.spdk:cnode28459", 00:14:47.711 "min_cntlid": 65520, 00:14:47.711 "method": "nvmf_create_subsystem", 00:14:47.711 "req_id": 1 00:14:47.711 } 00:14:47.711 Got JSON-RPC error response 00:14:47.711 response: 00:14:47.711 { 00:14:47.711 "code": -32602, 00:14:47.711 "message": "Invalid cntlid range [65520-65519]" 00:14:47.711 }' 00:14:47.711 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:47.711 { 00:14:47.711 "nqn": "nqn.2016-06.io.spdk:cnode28459", 00:14:47.711 "min_cntlid": 65520, 00:14:47.711 "method": "nvmf_create_subsystem", 00:14:47.711 "req_id": 1 00:14:47.711 } 00:14:47.711 Got JSON-RPC error response 00:14:47.711 response: 00:14:47.711 { 00:14:47.711 "code": -32602, 00:14:47.711 "message": "Invalid cntlid range [65520-65519]" 00:14:47.711 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:47.711 21:07:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28984 -I 0 00:14:47.711 [2024-12-05 21:07:49.073675] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode28984: invalid cntlid range [1-0] 00:14:47.711 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:47.711 { 00:14:47.711 "nqn": "nqn.2016-06.io.spdk:cnode28984", 00:14:47.711 "max_cntlid": 0, 00:14:47.711 "method": "nvmf_create_subsystem", 00:14:47.711 "req_id": 1 00:14:47.711 } 00:14:47.711 Got JSON-RPC error response 00:14:47.711 response: 00:14:47.711 { 00:14:47.711 "code": -32602, 00:14:47.711 "message": "Invalid cntlid range [1-0]" 00:14:47.711 }' 00:14:47.711 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:47.711 { 00:14:47.711 "nqn": "nqn.2016-06.io.spdk:cnode28984", 00:14:47.711 "max_cntlid": 0, 00:14:47.711 "method": "nvmf_create_subsystem", 00:14:47.711 "req_id": 1 00:14:47.711 } 00:14:47.711 Got JSON-RPC error response 00:14:47.711 response: 00:14:47.711 { 00:14:47.711 "code": -32602, 00:14:47.711 "message": "Invalid cntlid range [1-0]" 00:14:47.711 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:47.711 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8126 -I 65520 00:14:47.971 [2024-12-05 21:07:49.262266] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8126: invalid cntlid range [1-65520] 00:14:47.971 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:47.971 { 00:14:47.971 "nqn": "nqn.2016-06.io.spdk:cnode8126", 00:14:47.972 "max_cntlid": 65520, 00:14:47.972 "method": "nvmf_create_subsystem", 00:14:47.972 "req_id": 1 00:14:47.972 } 00:14:47.972 Got JSON-RPC error response 00:14:47.972 response: 00:14:47.972 { 00:14:47.972 "code": -32602, 00:14:47.972 "message": "Invalid cntlid range [1-65520]" 00:14:47.972 }' 00:14:47.972 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:14:47.972 { 00:14:47.972 "nqn": "nqn.2016-06.io.spdk:cnode8126", 00:14:47.972 "max_cntlid": 65520, 00:14:47.972 "method": "nvmf_create_subsystem", 00:14:47.972 "req_id": 1 00:14:47.972 } 00:14:47.972 Got JSON-RPC error response 00:14:47.972 response: 00:14:47.972 { 00:14:47.972 "code": -32602, 00:14:47.972 "message": "Invalid cntlid range [1-65520]" 00:14:47.972 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:47.972 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2211 -i 6 -I 5 00:14:48.231 [2024-12-05 21:07:49.442859] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2211: invalid cntlid range [6-5] 00:14:48.231 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:48.231 { 00:14:48.231 "nqn": "nqn.2016-06.io.spdk:cnode2211", 00:14:48.231 "min_cntlid": 6, 00:14:48.231 "max_cntlid": 5, 00:14:48.231 "method": "nvmf_create_subsystem", 00:14:48.231 "req_id": 1 00:14:48.231 } 00:14:48.231 Got JSON-RPC error response 00:14:48.231 response: 00:14:48.231 { 00:14:48.231 "code": -32602, 00:14:48.231 "message": "Invalid cntlid range [6-5]" 00:14:48.231 }' 00:14:48.231 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:48.231 { 00:14:48.231 "nqn": "nqn.2016-06.io.spdk:cnode2211", 00:14:48.231 "min_cntlid": 6, 00:14:48.231 "max_cntlid": 5, 00:14:48.231 "method": "nvmf_create_subsystem", 00:14:48.231 "req_id": 1 00:14:48.231 } 00:14:48.231 Got JSON-RPC error response 00:14:48.231 response: 00:14:48.231 { 00:14:48.231 "code": -32602, 00:14:48.231 "message": "Invalid cntlid range [6-5]" 00:14:48.231 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:48.231 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:48.231 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:48.231 { 00:14:48.231 "name": "foobar", 00:14:48.231 "method": "nvmf_delete_target", 00:14:48.231 "req_id": 1 00:14:48.231 } 00:14:48.231 Got JSON-RPC error response 00:14:48.231 response: 00:14:48.231 { 00:14:48.231 "code": -32602, 00:14:48.231 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:48.231 }' 00:14:48.231 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:48.231 { 00:14:48.231 "name": "foobar", 00:14:48.231 "method": "nvmf_delete_target", 00:14:48.231 "req_id": 1 00:14:48.231 } 00:14:48.231 Got JSON-RPC error response 00:14:48.231 response: 00:14:48.231 { 00:14:48.231 "code": -32602, 00:14:48.231 "message": "The specified target doesn't exist, cannot delete it." 00:14:48.232 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.232 rmmod nvme_tcp 00:14:48.232 
rmmod nvme_fabrics 00:14:48.232 rmmod nvme_keyring 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2019065 ']' 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2019065 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2019065 ']' 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2019065 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.232 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2019065 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2019065' 00:14:48.492 killing process with pid 2019065 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2019065 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2019065 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.492 21:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.492 21:07:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:51.033 00:14:51.033 real 0m15.019s 00:14:51.033 user 0m20.754s 00:14:51.033 sys 0m7.391s 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:51.033 ************************************ 00:14:51.033 END TEST nvmf_invalid 00:14:51.033 ************************************ 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.033 ************************************ 00:14:51.033 START TEST nvmf_connect_stress 00:14:51.033 ************************************ 00:14:51.033 21:07:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:51.033 * Looking for test storage... 00:14:51.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.033 --rc genhtml_branch_coverage=1 00:14:51.033 --rc genhtml_function_coverage=1 00:14:51.033 --rc genhtml_legend=1 00:14:51.033 --rc 
geninfo_all_blocks=1 00:14:51.033 --rc geninfo_unexecuted_blocks=1 00:14:51.033 00:14:51.033 ' 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.033 --rc genhtml_branch_coverage=1 00:14:51.033 --rc genhtml_function_coverage=1 00:14:51.033 --rc genhtml_legend=1 00:14:51.033 --rc geninfo_all_blocks=1 00:14:51.033 --rc geninfo_unexecuted_blocks=1 00:14:51.033 00:14:51.033 ' 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.033 --rc genhtml_branch_coverage=1 00:14:51.033 --rc genhtml_function_coverage=1 00:14:51.033 --rc genhtml_legend=1 00:14:51.033 --rc geninfo_all_blocks=1 00:14:51.033 --rc geninfo_unexecuted_blocks=1 00:14:51.033 00:14:51.033 ' 00:14:51.033 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.033 --rc genhtml_branch_coverage=1 00:14:51.033 --rc genhtml_function_coverage=1 00:14:51.033 --rc genhtml_legend=1 00:14:51.033 --rc geninfo_all_blocks=1 00:14:51.033 --rc geninfo_unexecuted_blocks=1 00:14:51.033 00:14:51.033 ' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.034 
21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:51.034 21:07:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:59.171 21:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:59.171 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:59.171 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.171 21:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:59.171 Found net devices under 0000:31:00.0: cvl_0_0 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:59.171 Found net devices under 0000:31:00.1: cvl_0_1 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:59.171 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.172 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.172 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.172 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.172 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:59.172 21:07:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:59.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:14:59.172 00:14:59.172 --- 10.0.0.2 ping statistics --- 00:14:59.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.172 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:59.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:14:59.172 00:14:59.172 --- 10.0.0.1 ping statistics --- 00:14:59.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.172 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2024648 00:14:59.172 21:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2024648 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2024648 ']' 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.172 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:59.172 [2024-12-05 21:08:00.186676] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:14:59.172 [2024-12-05 21:08:00.186756] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.172 [2024-12-05 21:08:00.296267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.172 [2024-12-05 21:08:00.348300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:59.172 [2024-12-05 21:08:00.348352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.172 [2024-12-05 21:08:00.348361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.172 [2024-12-05 21:08:00.348368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.172 [2024-12-05 21:08:00.348375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.172 [2024-12-05 21:08:00.350229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.172 [2024-12-05 21:08:00.350394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.172 [2024-12-05 21:08:00.350394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.742 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.742 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:59.742 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:59.742 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.742 21:08:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.742 [2024-12-05 21:08:01.031099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.742 [2024-12-05 21:08:01.055427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.742 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.743 NULL1 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2024950 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:59.743 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:00.003 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:00.003 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.003 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
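The xtrace above shows connect_stress.sh looping `for i in $(seq 1 20)` (line @27) and `cat`-ing into a batch file (line @28) that was initialized at line @25 and is later fed to rpc_cmd. A minimal sketch of that pattern; the RPC payload written here is an assumption, since the log does not show the heredoc contents:

```shell
#!/usr/bin/env bash
# Sketch of the rpc.txt build pattern traced above (connect_stress.sh@25-28).
# The bdev_null_create payload is an assumption; the real script's RPC text
# is not visible in this log.
rpcs=$(mktemp)
: > "$rpcs"                                   # start from an empty batch file (line @25 does rm -f)
for i in $(seq 1 20); do                      # line @27
    # line @28 appends via `cat` and a heredoc; printf keeps the sketch compact
    printf 'bdev_null_create null%d 100 512\n' "$i" >> "$rpcs"
done
wc -l < "$rpcs"                               # 20
rm -f "$rpcs"
```

The resulting file is replayed repeatedly against the target's RPC socket while the stress binary runs, which is what the long run of `rpc_cmd` entries below corresponds to.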
common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.003 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.262 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.262 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:00.262 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.262 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.262 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.522 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.522 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:00.522 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.522 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.522 21:08:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.783 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.783 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:00.783 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:00.783 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.783 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.354 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.354 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:01.354 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.354 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.354 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.614 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.614 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:01.614 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.615 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.615 21:08:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:01.875 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.875 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:01.875 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:01.875 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.875 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.135 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.135 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:02.135 21:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.135 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.135 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.397 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.397 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:02.397 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.397 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.397 21:08:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.969 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.969 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:02.969 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.969 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.969 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.229 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.229 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:03.229 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.229 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.229 
21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.489 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.489 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:03.489 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.489 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.489 21:08:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.750 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.750 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:03.750 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.750 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.750 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.011 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.011 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:04.011 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.011 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.011 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.582 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.582 
21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:04.582 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.582 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.582 21:08:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.841 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.841 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:04.841 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.841 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.841 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.101 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.102 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:05.102 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.102 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.102 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.361 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.361 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:05.361 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:05.361 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.361 21:08:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.621 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.621 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:05.621 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.621 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.621 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.191 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.191 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:06.191 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.191 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.191 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.450 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.450 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:06.450 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.450 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.450 21:08:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:06.712 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.712 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:06.712 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.712 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.712 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.978 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.978 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:06.978 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.978 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.978 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.337 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.337 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:07.337 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.337 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.337 21:08:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.636 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.636 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2024950 00:15:07.636 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.636 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.636 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.206 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.206 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:08.206 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.206 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.206 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.467 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.467 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:08.467 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.467 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.467 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.727 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.727 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:08.727 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.727 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:08.727 21:08:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.988 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.988 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:08.988 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.988 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.988 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.249 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.249 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:09.249 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.249 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.249 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.820 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.820 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950 00:15:09.820 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.820 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.820 21:08:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.820 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
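The repeating `kill -0 2024950` / `rpc_cmd` cycle above is connect_stress.sh@34-35 polling the stress process: `kill -0` sends no signal and only tests whether the PID still exists, so the script keeps issuing RPCs until the binary exits on its own. A self-contained sketch of that loop, with a short `sleep` standing in for the connect_stress binary and a no-op in place of the real RPC replay:

```shell
#!/usr/bin/env bash
# Sketch of the polling loop traced above (connect_stress.sh@34-38).
sleep 2 &            # stand-in for the connect_stress binary ($PERF_PID in the trace)
PERF_PID=$!

# kill -0 delivers no signal; it only checks that the process still exists.
while kill -0 "$PERF_PID" 2>/dev/null; do
    :                # the real script replays rpc.txt here (line @35 rpc_cmd)
    sleep 0.2
done
wait "$PERF_PID"     # reap it, as line @38 does
echo "process $PERF_PID finished"
```

Once the PID is gone, `kill -0` fails (the "No such process" message in the log below is the same check observing the already-exited binary) and the script falls through to cleanup.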
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2024950
00:15:10.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2024950) - No such process
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2024950
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:10.081 rmmod nvme_tcp
00:15:10.081 rmmod nvme_fabrics
00:15:10.081 rmmod nvme_keyring
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2024648 ']'
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2024648
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2024648 ']'
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2024648
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2024648
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2024648'
00:15:10.081 killing process with pid 2024648
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2024648
00:15:10.081 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2024648
00:15:10.341 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:10.341 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:10.341 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:10.341 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:10.342 21:08:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:12.254
00:15:12.254 real 0m21.620s
00:15:12.254 user 0m42.294s
00:15:12.254 sys 0m9.530s
00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:15:12.254 ************************************
00:15:12.254 END TEST nvmf_connect_stress
00:15:12.254 ************************************
00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra --
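The teardown above runs killprocess on the nvmf target (pid 2024648): autotest_common.sh@954-978 checks the PID is non-empty and alive, reads its command name with `ps --no-headers -o comm=`, refuses to signal a `sudo` process, then kills and waits. A simplified sketch of that logic; the real helper also handles FreeBSD and sudo-owned processes, which this sketch omits:

```shell
#!/usr/bin/env bash
# Simplified sketch of killprocess() as traced above (autotest_common.sh@954-978).
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                        # @954: pid argument present?
    kill -0 "$pid" 2>/dev/null || return 1           # @958: process still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # @960: command name of the pid
    if [ "$process_name" = "sudo" ]; then            # @964: never signal sudo itself
        return 1
    fi
    echo "killing process with pid $pid"             # @972
    kill "$pid"                                      # @973: default SIGTERM
    wait "$pid" 2>/dev/null || true                  # @978: reap it (ignore exit code)
    return 0
}

sleep 30 &
killprocess_sketch $!
```

The `'[' reactor_1 = sudo ']'` entry in the log is this comm-name guard observing the SPDK reactor thread name.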
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.254 ************************************ 00:15:12.254 START TEST nvmf_fused_ordering 00:15:12.254 ************************************ 00:15:12.254 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:12.516 * Looking for test storage... 00:15:12.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.516 21:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.516 --rc genhtml_branch_coverage=1 00:15:12.516 --rc genhtml_function_coverage=1 00:15:12.516 --rc genhtml_legend=1 00:15:12.516 --rc geninfo_all_blocks=1 00:15:12.516 --rc geninfo_unexecuted_blocks=1 00:15:12.516 00:15:12.516 ' 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.516 --rc genhtml_branch_coverage=1 00:15:12.516 --rc genhtml_function_coverage=1 00:15:12.516 --rc genhtml_legend=1 00:15:12.516 --rc geninfo_all_blocks=1 00:15:12.516 --rc geninfo_unexecuted_blocks=1 00:15:12.516 00:15:12.516 ' 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:12.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.516 --rc genhtml_branch_coverage=1 00:15:12.516 --rc genhtml_function_coverage=1 00:15:12.516 --rc genhtml_legend=1 00:15:12.516 --rc geninfo_all_blocks=1 00:15:12.516 --rc geninfo_unexecuted_blocks=1 00:15:12.516 00:15:12.516 ' 00:15:12.516 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:12.516 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:12.516 --rc genhtml_branch_coverage=1 00:15:12.516 --rc genhtml_function_coverage=1 00:15:12.516 --rc genhtml_legend=1 00:15:12.516 --rc geninfo_all_blocks=1 00:15:12.516 --rc geninfo_unexecuted_blocks=1 00:15:12.516 00:15:12.516 ' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.517 21:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
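The `line 33: [: : integer expression expected` message above is `test(1)` receiving an empty string where `-eq` expects a number (`'[' '' -eq 1 ']'` in the trace). A small sketch of the failure mode and one common guard (variable names here are illustrative):

```shell
# An unset/empty value fed to a numeric test triggers the traced error;
# defaulting it with ${var:-0} keeps the comparison well-formed.
val=''
if [ "${val:-0}" -eq 1 ]; then
    echo enabled
else
    echo disabled
fi
```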
00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.517 21:08:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.655 21:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:20.655 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:20.655 21:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:20.655 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:20.655 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.656 21:08:21 
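The `[[ 0x159b == \0\x\1\0\1\7 ]]`-style checks above are bucketing each discovered NIC by its PCI device ID before picking test interfaces. A compact sketch of that classification (device IDs taken from the traced `pci_bus_cache` lookups; the `classify_nic` helper is hypothetical):

```shell
# Map a PCI device ID to the NIC family the trace distinguishes.
classify_nic() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;    # Intel E810 (ice driver)
        0x37d2)        echo x722 ;;    # Intel X722
        0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;     # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

classify_nic 0x159b   # both ports found in the log are 0x8086:0x159b -> e810
```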
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:20.656 Found net devices under 0000:31:00.0: cvl_0_0 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:20.656 Found net devices under 0000:31:00.1: cvl_0_1 
00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.656 21:08:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:20.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:20.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:15:20.916 00:15:20.916 --- 10.0.0.2 ping statistics --- 00:15:20.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.916 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:15:20.916 00:15:20.916 --- 10.0.0.1 ping statistics --- 00:15:20.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.916 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:20.916 21:08:22 
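The `nvmf_tcp_init` trace above boils down to moving the target NIC into its own network namespace so that both ends of the NVMe/TCP connection live on one host, then verifying reachability in both directions. A sketch of that plumbing, reconstructed from the trace (requires root; interface names `cvl_0_0`/`cvl_0_1` come from this log, so treat them as examples):

```shell
# Target side goes into a private namespace; initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP port through on the initiator side, then sanity-check.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
```

This is environment configuration rather than a runnable unit; on a shared machine the namespace and iptables rule should be torn down afterwards (`ip netns del`, `iptables -D`).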
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:20.916 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2031663 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2031663 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2031663 ']' 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.917 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.176 [2024-12-05 21:08:22.366478] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:15:21.176 [2024-12-05 21:08:22.366527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.176 [2024-12-05 21:08:22.470085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.176 [2024-12-05 21:08:22.504613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.176 [2024-12-05 21:08:22.504645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.176 [2024-12-05 21:08:22.504654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.176 [2024-12-05 21:08:22.504660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.176 [2024-12-05 21:08:22.504666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:21.176 [2024-12-05 21:08:22.505234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.176 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.176 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:21.176 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.176 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:21.176 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 [2024-12-05 21:08:22.629646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 [2024-12-05 21:08:22.645890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 NULL1 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.436 21:08:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:21.436 [2024-12-05 21:08:22.701685] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:15:21.436 [2024-12-05 21:08:22.701732] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031714 ] 00:15:22.007 Attached to nqn.2016-06.io.spdk:cnode1 00:15:22.007 Namespace ID: 1 size: 1GB 00:15:22.007 fused_ordering(0) 00:15:22.007 fused_ordering(1) 00:15:22.007 fused_ordering(2) 00:15:22.007 fused_ordering(3) 00:15:22.007 fused_ordering(4) 00:15:22.007 fused_ordering(5) 00:15:22.007 fused_ordering(6) 00:15:22.007 fused_ordering(7) 00:15:22.007 fused_ordering(8) 00:15:22.007 fused_ordering(9) 00:15:22.007 fused_ordering(10) 00:15:22.007 fused_ordering(11) 00:15:22.007 fused_ordering(12) 00:15:22.007 fused_ordering(13) 00:15:22.007 fused_ordering(14) 00:15:22.007 fused_ordering(15) 00:15:22.007 fused_ordering(16) 00:15:22.007 fused_ordering(17) 00:15:22.007 fused_ordering(18) 00:15:22.007 fused_ordering(19) 00:15:22.007 fused_ordering(20) 00:15:22.007 fused_ordering(21) 00:15:22.007 fused_ordering(22) 00:15:22.007 fused_ordering(23) 00:15:22.007 fused_ordering(24) 00:15:22.007 fused_ordering(25) 00:15:22.007 fused_ordering(26) 00:15:22.007 fused_ordering(27) 00:15:22.007 
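The `rpc_cmd` calls traced above (fused_ordering.sh steps @15-@22) build the target the fused_ordering client then connects to. Reconstructed as plain SPDK `scripts/rpc.py` invocations against a running `nvmf_tgt` (paths abbreviated; the RPC names and arguments are taken directly from the trace):

```shell
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                          # allow any host, 10 namespaces max
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                         # 1000 MiB null bdev, 512 B blocks
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # becomes Namespace ID 1
```

The "Namespace ID: 1 size: 1GB" line in the client output below matches the null bdev attached here; the numbered `fused_ordering(N)` lines are the test iterating fused compare-and-write commands against that namespace.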
fused_ordering(28) 00:15:22.007 fused_ordering(29) 00:15:22.007 fused_ordering(30) 00:15:22.007 fused_ordering(31) 00:15:22.007 fused_ordering(32) 00:15:22.007 fused_ordering(33) 00:15:22.007 fused_ordering(34) 00:15:22.007 fused_ordering(35) 00:15:22.007 fused_ordering(36) 00:15:22.007 fused_ordering(37) 00:15:22.007 fused_ordering(38) 00:15:22.007 fused_ordering(39) 00:15:22.007 fused_ordering(40) 00:15:22.007 fused_ordering(41) 00:15:22.007 fused_ordering(42) 00:15:22.007 fused_ordering(43) 00:15:22.007 fused_ordering(44) 00:15:22.007 fused_ordering(45) 00:15:22.007 fused_ordering(46) 00:15:22.007 fused_ordering(47) 00:15:22.007 fused_ordering(48) 00:15:22.007 fused_ordering(49) 00:15:22.007 fused_ordering(50) 00:15:22.007 fused_ordering(51) 00:15:22.007 fused_ordering(52) 00:15:22.007 fused_ordering(53) 00:15:22.007 fused_ordering(54) 00:15:22.007 fused_ordering(55) 00:15:22.007 fused_ordering(56) 00:15:22.007 fused_ordering(57) 00:15:22.007 fused_ordering(58) 00:15:22.007 fused_ordering(59) 00:15:22.007 fused_ordering(60) 00:15:22.007 fused_ordering(61) 00:15:22.007 fused_ordering(62) 00:15:22.007 fused_ordering(63) 00:15:22.007 fused_ordering(64) 00:15:22.007 fused_ordering(65) 00:15:22.007 fused_ordering(66) 00:15:22.007 fused_ordering(67) 00:15:22.007 fused_ordering(68) 00:15:22.007 fused_ordering(69) 00:15:22.008 fused_ordering(70) 00:15:22.008 fused_ordering(71) 00:15:22.008 fused_ordering(72) 00:15:22.008 fused_ordering(73) 00:15:22.008 fused_ordering(74) 00:15:22.008 fused_ordering(75) 00:15:22.008 fused_ordering(76) 00:15:22.008 fused_ordering(77) 00:15:22.008 fused_ordering(78) 00:15:22.008 fused_ordering(79) 00:15:22.008 fused_ordering(80) 00:15:22.008 fused_ordering(81) 00:15:22.008 fused_ordering(82) 00:15:22.008 fused_ordering(83) 00:15:22.008 fused_ordering(84) 00:15:22.008 fused_ordering(85) 00:15:22.008 fused_ordering(86) 00:15:22.008 fused_ordering(87) 00:15:22.008 fused_ordering(88) 00:15:22.008 fused_ordering(89) 00:15:22.008 
fused_ordering(90) 00:15:22.008 fused_ordering(91) 00:15:22.008 fused_ordering(92) 00:15:22.008 fused_ordering(93) 00:15:22.008 fused_ordering(94) 00:15:22.008 fused_ordering(95) 00:15:22.008 fused_ordering(96) 00:15:22.008 fused_ordering(97) 00:15:22.008 fused_ordering(98) 00:15:22.008 fused_ordering(99) 00:15:22.008 fused_ordering(100) 00:15:22.008 fused_ordering(101) 00:15:22.008 fused_ordering(102) 00:15:22.008 fused_ordering(103) 00:15:22.008 fused_ordering(104) 00:15:22.008 fused_ordering(105) 00:15:22.008 fused_ordering(106) 00:15:22.008 fused_ordering(107) 00:15:22.008 fused_ordering(108) 00:15:22.008 fused_ordering(109) 00:15:22.008 fused_ordering(110) 00:15:22.008 fused_ordering(111) 00:15:22.008 fused_ordering(112) 00:15:22.008 fused_ordering(113) 00:15:22.008 fused_ordering(114) 00:15:22.008 fused_ordering(115) 00:15:22.008 fused_ordering(116) 00:15:22.008 fused_ordering(117) 00:15:22.008 fused_ordering(118) 00:15:22.008 fused_ordering(119) 00:15:22.008 fused_ordering(120) 00:15:22.008 fused_ordering(121) 00:15:22.008 fused_ordering(122) 00:15:22.008 fused_ordering(123) 00:15:22.008 fused_ordering(124) 00:15:22.008 fused_ordering(125) 00:15:22.008 fused_ordering(126) 00:15:22.008 fused_ordering(127) 00:15:22.008 fused_ordering(128) 00:15:22.008 fused_ordering(129) 00:15:22.008 fused_ordering(130) 00:15:22.008 fused_ordering(131) 00:15:22.008 fused_ordering(132) 00:15:22.008 fused_ordering(133) 00:15:22.008 fused_ordering(134) 00:15:22.008 fused_ordering(135) 00:15:22.008 fused_ordering(136) 00:15:22.008 fused_ordering(137) 00:15:22.008 fused_ordering(138) 00:15:22.008 fused_ordering(139) 00:15:22.008 fused_ordering(140) 00:15:22.008 fused_ordering(141) 00:15:22.008 fused_ordering(142) 00:15:22.008 fused_ordering(143) 00:15:22.008 fused_ordering(144) 00:15:22.008 fused_ordering(145) 00:15:22.008 fused_ordering(146) 00:15:22.008 fused_ordering(147) 00:15:22.008 fused_ordering(148) 00:15:22.008 fused_ordering(149) 00:15:22.008 fused_ordering(150) 
00:15:22.008 fused_ordering(151) 00:15:22.008 fused_ordering(152) 00:15:22.008 fused_ordering(153) 00:15:22.008 fused_ordering(154) 00:15:22.008 fused_ordering(155) 00:15:22.008 fused_ordering(156) 00:15:22.008 fused_ordering(157) 00:15:22.008 fused_ordering(158) 00:15:22.008 fused_ordering(159) 00:15:22.008 fused_ordering(160) 00:15:22.008 fused_ordering(161) 00:15:22.008 fused_ordering(162) 00:15:22.008 fused_ordering(163) 00:15:22.008 fused_ordering(164) 00:15:22.008 fused_ordering(165) 00:15:22.008 fused_ordering(166) 00:15:22.008 fused_ordering(167) 00:15:22.008 fused_ordering(168) 00:15:22.008 fused_ordering(169) 00:15:22.008 fused_ordering(170) 00:15:22.008 fused_ordering(171) 00:15:22.008 fused_ordering(172) 00:15:22.008 fused_ordering(173) 00:15:22.008 fused_ordering(174) 00:15:22.008 fused_ordering(175) 00:15:22.008 fused_ordering(176) 00:15:22.008 fused_ordering(177) 00:15:22.008 fused_ordering(178) 00:15:22.008 fused_ordering(179) 00:15:22.008 fused_ordering(180) 00:15:22.008 fused_ordering(181) 00:15:22.008 fused_ordering(182) 00:15:22.008 fused_ordering(183) 00:15:22.008 fused_ordering(184) 00:15:22.008 fused_ordering(185) 00:15:22.008 fused_ordering(186) 00:15:22.008 fused_ordering(187) 00:15:22.008 fused_ordering(188) 00:15:22.008 fused_ordering(189) 00:15:22.008 fused_ordering(190) 00:15:22.008 fused_ordering(191) 00:15:22.008 fused_ordering(192) 00:15:22.008 fused_ordering(193) 00:15:22.008 fused_ordering(194) 00:15:22.008 fused_ordering(195) 00:15:22.008 fused_ordering(196) 00:15:22.008 fused_ordering(197) 00:15:22.008 fused_ordering(198) 00:15:22.008 fused_ordering(199) 00:15:22.008 fused_ordering(200) 00:15:22.008 fused_ordering(201) 00:15:22.008 fused_ordering(202) 00:15:22.008 fused_ordering(203) 00:15:22.008 fused_ordering(204) 00:15:22.008 fused_ordering(205) 00:15:22.269 fused_ordering(206) 00:15:22.269 fused_ordering(207) 00:15:22.269 fused_ordering(208) 00:15:22.269 fused_ordering(209) 00:15:22.269 fused_ordering(210) 00:15:22.269 
fused_ordering(211) 00:15:22.269 fused_ordering(212) 00:15:22.269 fused_ordering(213) 00:15:22.269 fused_ordering(214) 00:15:22.269 fused_ordering(215) 00:15:22.269 fused_ordering(216) 00:15:22.269 fused_ordering(217) 00:15:22.269 fused_ordering(218) 00:15:22.269 fused_ordering(219) 00:15:22.269 fused_ordering(220) 00:15:22.269 fused_ordering(221) 00:15:22.269 fused_ordering(222) 00:15:22.269 fused_ordering(223) 00:15:22.269 fused_ordering(224) 00:15:22.269 fused_ordering(225) 00:15:22.269 fused_ordering(226) 00:15:22.269 fused_ordering(227) 00:15:22.269 fused_ordering(228) 00:15:22.269 fused_ordering(229) 00:15:22.269 fused_ordering(230) 00:15:22.269 fused_ordering(231) 00:15:22.269 fused_ordering(232) 00:15:22.269 fused_ordering(233) 00:15:22.269 fused_ordering(234) 00:15:22.269 fused_ordering(235) 00:15:22.269 fused_ordering(236) 00:15:22.269 fused_ordering(237) 00:15:22.269 fused_ordering(238) 00:15:22.269 fused_ordering(239) 00:15:22.269 fused_ordering(240) 00:15:22.269 fused_ordering(241) 00:15:22.269 fused_ordering(242) 00:15:22.269 fused_ordering(243) 00:15:22.269 fused_ordering(244) 00:15:22.269 fused_ordering(245) 00:15:22.269 fused_ordering(246) 00:15:22.269 fused_ordering(247) 00:15:22.269 fused_ordering(248) 00:15:22.269 fused_ordering(249) 00:15:22.269 fused_ordering(250) 00:15:22.269 fused_ordering(251) 00:15:22.269 fused_ordering(252) 00:15:22.269 fused_ordering(253) 00:15:22.269 fused_ordering(254) 00:15:22.269 fused_ordering(255) 00:15:22.269 fused_ordering(256) 00:15:22.269 fused_ordering(257) 00:15:22.269 fused_ordering(258) 00:15:22.269 fused_ordering(259) 00:15:22.269 fused_ordering(260) 00:15:22.269 fused_ordering(261) 00:15:22.269 fused_ordering(262) 00:15:22.269 fused_ordering(263) 00:15:22.269 fused_ordering(264) 00:15:22.269 fused_ordering(265) 00:15:22.269 fused_ordering(266) 00:15:22.269 fused_ordering(267) 00:15:22.269 fused_ordering(268) 00:15:22.269 fused_ordering(269) 00:15:22.269 fused_ordering(270) 00:15:22.269 fused_ordering(271) 
00:15:22.269 fused_ordering(272) 00:15:22.269 fused_ordering(273) 00:15:22.269 fused_ordering(274) 00:15:22.269 fused_ordering(275) 00:15:22.269 fused_ordering(276) 00:15:22.269 fused_ordering(277) 00:15:22.269 fused_ordering(278) 00:15:22.269 fused_ordering(279) 00:15:22.269 fused_ordering(280) 00:15:22.269 fused_ordering(281) 00:15:22.269 fused_ordering(282) 00:15:22.269 fused_ordering(283) 00:15:22.269 fused_ordering(284) 00:15:22.269 fused_ordering(285) 00:15:22.269 fused_ordering(286) 00:15:22.269 fused_ordering(287) 00:15:22.269 fused_ordering(288) 00:15:22.269 fused_ordering(289) 00:15:22.269 fused_ordering(290) 00:15:22.269 fused_ordering(291) 00:15:22.269 fused_ordering(292) 00:15:22.269 fused_ordering(293) 00:15:22.269 fused_ordering(294) 00:15:22.269 fused_ordering(295) 00:15:22.269 fused_ordering(296) 00:15:22.269 fused_ordering(297) 00:15:22.269 fused_ordering(298) 00:15:22.269 fused_ordering(299) 00:15:22.269 fused_ordering(300) 00:15:22.269 fused_ordering(301) 00:15:22.269 fused_ordering(302) 00:15:22.269 fused_ordering(303) 00:15:22.269 fused_ordering(304) 00:15:22.270 fused_ordering(305) 00:15:22.270 fused_ordering(306) 00:15:22.270 fused_ordering(307) 00:15:22.270 fused_ordering(308) 00:15:22.270 fused_ordering(309) 00:15:22.270 fused_ordering(310) 00:15:22.270 fused_ordering(311) 00:15:22.270 fused_ordering(312) 00:15:22.270 fused_ordering(313) 00:15:22.270 fused_ordering(314) 00:15:22.270 fused_ordering(315) 00:15:22.270 fused_ordering(316) 00:15:22.270 fused_ordering(317) 00:15:22.270 fused_ordering(318) 00:15:22.270 fused_ordering(319) 00:15:22.270 fused_ordering(320) 00:15:22.270 fused_ordering(321) 00:15:22.270 fused_ordering(322) 00:15:22.270 fused_ordering(323) 00:15:22.270 fused_ordering(324) 00:15:22.270 fused_ordering(325) 00:15:22.270 fused_ordering(326) 00:15:22.270 fused_ordering(327) 00:15:22.270 fused_ordering(328) 00:15:22.270 fused_ordering(329) 00:15:22.270 fused_ordering(330) 00:15:22.270 fused_ordering(331) 00:15:22.270 
fused_ordering(332) 00:15:22.270 fused_ordering(333) 00:15:22.270 fused_ordering(334) 00:15:22.270 fused_ordering(335) 00:15:22.270 fused_ordering(336) 00:15:22.270 fused_ordering(337) 00:15:22.270 fused_ordering(338) 00:15:22.270 fused_ordering(339) 00:15:22.270 fused_ordering(340) 00:15:22.270 fused_ordering(341) 00:15:22.270 fused_ordering(342) 00:15:22.270 fused_ordering(343) 00:15:22.270 fused_ordering(344) 00:15:22.270 fused_ordering(345) 00:15:22.270 fused_ordering(346) 00:15:22.270 fused_ordering(347) 00:15:22.270 fused_ordering(348) 00:15:22.270 fused_ordering(349) 00:15:22.270 fused_ordering(350) 00:15:22.270 fused_ordering(351) 00:15:22.270 fused_ordering(352) 00:15:22.270 fused_ordering(353) 00:15:22.270 fused_ordering(354) 00:15:22.270 fused_ordering(355) 00:15:22.270 fused_ordering(356) 00:15:22.270 fused_ordering(357) 00:15:22.270 fused_ordering(358) 00:15:22.270 fused_ordering(359) 00:15:22.270 fused_ordering(360) 00:15:22.270 fused_ordering(361) 00:15:22.270 fused_ordering(362) 00:15:22.270 fused_ordering(363) 00:15:22.270 fused_ordering(364) 00:15:22.270 fused_ordering(365) 00:15:22.270 fused_ordering(366) 00:15:22.270 fused_ordering(367) 00:15:22.270 fused_ordering(368) 00:15:22.270 fused_ordering(369) 00:15:22.270 fused_ordering(370) 00:15:22.270 fused_ordering(371) 00:15:22.270 fused_ordering(372) 00:15:22.270 fused_ordering(373) 00:15:22.270 fused_ordering(374) 00:15:22.270 fused_ordering(375) 00:15:22.270 fused_ordering(376) 00:15:22.270 fused_ordering(377) 00:15:22.270 fused_ordering(378) 00:15:22.270 fused_ordering(379) 00:15:22.270 fused_ordering(380) 00:15:22.270 fused_ordering(381) 00:15:22.270 fused_ordering(382) 00:15:22.270 fused_ordering(383) 00:15:22.270 fused_ordering(384) 00:15:22.270 fused_ordering(385) 00:15:22.270 fused_ordering(386) 00:15:22.270 fused_ordering(387) 00:15:22.270 fused_ordering(388) 00:15:22.270 fused_ordering(389) 00:15:22.270 fused_ordering(390) 00:15:22.270 fused_ordering(391) 00:15:22.270 fused_ordering(392) 
00:15:22.270 fused_ordering(393) 00:15:22.270 fused_ordering(394) 00:15:22.270 fused_ordering(395) 00:15:22.270 fused_ordering(396) 00:15:22.270 fused_ordering(397) 00:15:22.270 fused_ordering(398) 00:15:22.270 fused_ordering(399) 00:15:22.270 fused_ordering(400) 00:15:22.270 fused_ordering(401) 00:15:22.270 fused_ordering(402) 00:15:22.270 fused_ordering(403) 00:15:22.270 fused_ordering(404) 00:15:22.270 fused_ordering(405) 00:15:22.270 fused_ordering(406) 00:15:22.270 fused_ordering(407) 00:15:22.270 fused_ordering(408) 00:15:22.270 fused_ordering(409) 00:15:22.270 fused_ordering(410) 00:15:22.530 fused_ordering(411) 00:15:22.530 fused_ordering(412) 00:15:22.530 fused_ordering(413) 00:15:22.530 fused_ordering(414) 00:15:22.530 fused_ordering(415) 00:15:22.530 fused_ordering(416) 00:15:22.530 fused_ordering(417) 00:15:22.530 fused_ordering(418) 00:15:22.530 fused_ordering(419) 00:15:22.530 fused_ordering(420) 00:15:22.530 fused_ordering(421) 00:15:22.530 fused_ordering(422) 00:15:22.530 fused_ordering(423) 00:15:22.530 fused_ordering(424) 00:15:22.530 fused_ordering(425) 00:15:22.530 fused_ordering(426) 00:15:22.530 fused_ordering(427) 00:15:22.530 fused_ordering(428) 00:15:22.530 fused_ordering(429) 00:15:22.530 fused_ordering(430) 00:15:22.530 fused_ordering(431) 00:15:22.530 fused_ordering(432) 00:15:22.530 fused_ordering(433) 00:15:22.530 fused_ordering(434) 00:15:22.530 fused_ordering(435) 00:15:22.530 fused_ordering(436) 00:15:22.530 fused_ordering(437) 00:15:22.530 fused_ordering(438) 00:15:22.530 fused_ordering(439) 00:15:22.530 fused_ordering(440) 00:15:22.530 fused_ordering(441) 00:15:22.530 fused_ordering(442) 00:15:22.530 fused_ordering(443) 00:15:22.530 fused_ordering(444) 00:15:22.530 fused_ordering(445) 00:15:22.530 fused_ordering(446) 00:15:22.530 fused_ordering(447) 00:15:22.530 fused_ordering(448) 00:15:22.530 fused_ordering(449) 00:15:22.530 fused_ordering(450) 00:15:22.530 fused_ordering(451) 00:15:22.530 fused_ordering(452) 00:15:22.530 
fused_ordering(453) 00:15:22.530 fused_ordering(454) 00:15:22.530 fused_ordering(455) 00:15:22.530 fused_ordering(456) 00:15:22.530 fused_ordering(457) 00:15:22.530 fused_ordering(458) 00:15:22.530 fused_ordering(459) 00:15:22.530 fused_ordering(460) 00:15:22.530 fused_ordering(461) 00:15:22.530 fused_ordering(462) 00:15:22.530 fused_ordering(463) 00:15:22.530 fused_ordering(464) 00:15:22.530 fused_ordering(465) 00:15:22.530 fused_ordering(466) 00:15:22.530 fused_ordering(467) 00:15:22.530 fused_ordering(468) 00:15:22.530 fused_ordering(469) 00:15:22.530 fused_ordering(470) 00:15:22.530 fused_ordering(471) 00:15:22.530 fused_ordering(472) 00:15:22.530 fused_ordering(473) 00:15:22.530 fused_ordering(474) 00:15:22.530 fused_ordering(475) 00:15:22.530 fused_ordering(476) 00:15:22.530 fused_ordering(477) 00:15:22.530 fused_ordering(478) 00:15:22.530 fused_ordering(479) 00:15:22.530 fused_ordering(480) 00:15:22.530 fused_ordering(481) 00:15:22.530 fused_ordering(482) 00:15:22.530 fused_ordering(483) 00:15:22.530 fused_ordering(484) 00:15:22.530 fused_ordering(485) 00:15:22.530 fused_ordering(486) 00:15:22.530 fused_ordering(487) 00:15:22.530 fused_ordering(488) 00:15:22.530 fused_ordering(489) 00:15:22.530 fused_ordering(490) 00:15:22.530 fused_ordering(491) 00:15:22.530 fused_ordering(492) 00:15:22.530 fused_ordering(493) 00:15:22.530 fused_ordering(494) 00:15:22.530 fused_ordering(495) 00:15:22.530 fused_ordering(496) 00:15:22.530 fused_ordering(497) 00:15:22.530 fused_ordering(498) 00:15:22.530 fused_ordering(499) 00:15:22.530 fused_ordering(500) 00:15:22.530 fused_ordering(501) 00:15:22.530 fused_ordering(502) 00:15:22.530 fused_ordering(503) 00:15:22.531 fused_ordering(504) 00:15:22.531 fused_ordering(505) 00:15:22.531 fused_ordering(506) 00:15:22.531 fused_ordering(507) 00:15:22.531 fused_ordering(508) 00:15:22.531 fused_ordering(509) 00:15:22.531 fused_ordering(510) 00:15:22.531 fused_ordering(511) 00:15:22.531 fused_ordering(512) 00:15:22.531 fused_ordering(513) 
00:15:22.531 fused_ordering(514) 00:15:22.531 fused_ordering(515) 00:15:22.531 fused_ordering(516) 00:15:22.531 fused_ordering(517) 00:15:22.531 fused_ordering(518) 00:15:22.531 fused_ordering(519) 00:15:22.531 fused_ordering(520) 00:15:22.531 fused_ordering(521) 00:15:22.531 fused_ordering(522) 00:15:22.531 fused_ordering(523) 00:15:22.531 fused_ordering(524) 00:15:22.531 fused_ordering(525) 00:15:22.531 fused_ordering(526) 00:15:22.531 fused_ordering(527) 00:15:22.531 fused_ordering(528) 00:15:22.531 fused_ordering(529) 00:15:22.531 fused_ordering(530) 00:15:22.531 fused_ordering(531) 00:15:22.531 fused_ordering(532) 00:15:22.531 fused_ordering(533) 00:15:22.531 fused_ordering(534) 00:15:22.531 fused_ordering(535) 00:15:22.531 fused_ordering(536) 00:15:22.531 fused_ordering(537) 00:15:22.531 fused_ordering(538) 00:15:22.531 fused_ordering(539) 00:15:22.531 fused_ordering(540) 00:15:22.531 fused_ordering(541) 00:15:22.531 fused_ordering(542) 00:15:22.531 fused_ordering(543) 00:15:22.531 fused_ordering(544) 00:15:22.531 fused_ordering(545) 00:15:22.531 fused_ordering(546) 00:15:22.531 fused_ordering(547) 00:15:22.531 fused_ordering(548) 00:15:22.531 fused_ordering(549) 00:15:22.531 fused_ordering(550) 00:15:22.531 fused_ordering(551) 00:15:22.531 fused_ordering(552) 00:15:22.531 fused_ordering(553) 00:15:22.531 fused_ordering(554) 00:15:22.531 fused_ordering(555) 00:15:22.531 fused_ordering(556) 00:15:22.531 fused_ordering(557) 00:15:22.531 fused_ordering(558) 00:15:22.531 fused_ordering(559) 00:15:22.531 fused_ordering(560) 00:15:22.531 fused_ordering(561) 00:15:22.531 fused_ordering(562) 00:15:22.531 fused_ordering(563) 00:15:22.531 fused_ordering(564) 00:15:22.531 fused_ordering(565) 00:15:22.531 fused_ordering(566) 00:15:22.531 fused_ordering(567) 00:15:22.531 fused_ordering(568) 00:15:22.531 fused_ordering(569) 00:15:22.531 fused_ordering(570) 00:15:22.531 fused_ordering(571) 00:15:22.531 fused_ordering(572) 00:15:22.531 fused_ordering(573) 00:15:22.531 
fused_ordering(574) 00:15:22.531 fused_ordering(575) 00:15:22.531 fused_ordering(576) 00:15:22.531 fused_ordering(577) 00:15:22.531 fused_ordering(578) 00:15:22.531 fused_ordering(579) 00:15:22.531 fused_ordering(580) 00:15:22.531 fused_ordering(581) 00:15:22.531 fused_ordering(582) 00:15:22.531 fused_ordering(583) 00:15:22.531 fused_ordering(584) 00:15:22.531 fused_ordering(585) 00:15:22.531 fused_ordering(586) 00:15:22.531 fused_ordering(587) 00:15:22.531 fused_ordering(588) 00:15:22.531 fused_ordering(589) 00:15:22.531 fused_ordering(590) 00:15:22.531 fused_ordering(591) 00:15:22.531 fused_ordering(592) 00:15:22.531 fused_ordering(593) 00:15:22.531 fused_ordering(594) 00:15:22.531 fused_ordering(595) 00:15:22.531 fused_ordering(596) 00:15:22.531 fused_ordering(597) 00:15:22.531 fused_ordering(598) 00:15:22.531 fused_ordering(599) 00:15:22.531 fused_ordering(600) 00:15:22.531 fused_ordering(601) 00:15:22.531 fused_ordering(602) 00:15:22.531 fused_ordering(603) 00:15:22.531 fused_ordering(604) 00:15:22.531 fused_ordering(605) 00:15:22.531 fused_ordering(606) 00:15:22.531 fused_ordering(607) 00:15:22.531 fused_ordering(608) 00:15:22.531 fused_ordering(609) 00:15:22.531 fused_ordering(610) 00:15:22.531 fused_ordering(611) 00:15:22.531 fused_ordering(612) 00:15:22.531 fused_ordering(613) 00:15:22.531 fused_ordering(614) 00:15:22.531 fused_ordering(615) 00:15:23.101 fused_ordering(616) 00:15:23.101 fused_ordering(617) 00:15:23.101 fused_ordering(618) 00:15:23.101 fused_ordering(619) 00:15:23.101 fused_ordering(620) 00:15:23.101 fused_ordering(621) 00:15:23.101 fused_ordering(622) 00:15:23.101 fused_ordering(623) 00:15:23.101 fused_ordering(624) 00:15:23.101 fused_ordering(625) 00:15:23.101 fused_ordering(626) 00:15:23.101 fused_ordering(627) 00:15:23.101 fused_ordering(628) 00:15:23.101 fused_ordering(629) 00:15:23.101 fused_ordering(630) 00:15:23.101 fused_ordering(631) 00:15:23.101 fused_ordering(632) 00:15:23.101 fused_ordering(633) 00:15:23.101 fused_ordering(634) 
00:15:23.101 fused_ordering(635) 00:15:23.101 fused_ordering(636) 00:15:23.101 fused_ordering(637) 00:15:23.101 fused_ordering(638) 00:15:23.101 fused_ordering(639) 00:15:23.101 fused_ordering(640) 00:15:23.101 fused_ordering(641) 00:15:23.101 fused_ordering(642) 00:15:23.101 fused_ordering(643) 00:15:23.101 fused_ordering(644) 00:15:23.101 fused_ordering(645) 00:15:23.101 fused_ordering(646) 00:15:23.101 fused_ordering(647) 00:15:23.101 fused_ordering(648) 00:15:23.101 fused_ordering(649) 00:15:23.101 fused_ordering(650) 00:15:23.101 fused_ordering(651) 00:15:23.101 fused_ordering(652) 00:15:23.101 fused_ordering(653) 00:15:23.101 fused_ordering(654) 00:15:23.101 fused_ordering(655) 00:15:23.101 fused_ordering(656) 00:15:23.101 fused_ordering(657) 00:15:23.101 fused_ordering(658) 00:15:23.101 fused_ordering(659) 00:15:23.101 fused_ordering(660) 00:15:23.101 fused_ordering(661) 00:15:23.101 fused_ordering(662) 00:15:23.101 fused_ordering(663) 00:15:23.101 fused_ordering(664) 00:15:23.101 fused_ordering(665) 00:15:23.101 fused_ordering(666) 00:15:23.101 fused_ordering(667) 00:15:23.101 fused_ordering(668) 00:15:23.101 fused_ordering(669) 00:15:23.101 fused_ordering(670) 00:15:23.101 fused_ordering(671) 00:15:23.101 fused_ordering(672) 00:15:23.101 fused_ordering(673) 00:15:23.101 fused_ordering(674) 00:15:23.101 fused_ordering(675) 00:15:23.101 fused_ordering(676) 00:15:23.101 fused_ordering(677) 00:15:23.101 fused_ordering(678) 00:15:23.101 fused_ordering(679) 00:15:23.101 fused_ordering(680) 00:15:23.101 fused_ordering(681) 00:15:23.101 fused_ordering(682) 00:15:23.101 fused_ordering(683) 00:15:23.101 fused_ordering(684) 00:15:23.101 fused_ordering(685) 00:15:23.101 fused_ordering(686) 00:15:23.101 fused_ordering(687) 00:15:23.101 fused_ordering(688) 00:15:23.101 fused_ordering(689) 00:15:23.101 fused_ordering(690) 00:15:23.101 fused_ordering(691) 00:15:23.101 fused_ordering(692) 00:15:23.101 fused_ordering(693) 00:15:23.101 fused_ordering(694) 00:15:23.101 
fused_ordering(695) 00:15:23.101 fused_ordering(696) 00:15:23.101 fused_ordering(697) 00:15:23.101 fused_ordering(698) 00:15:23.101 fused_ordering(699) 00:15:23.101 fused_ordering(700) 00:15:23.101 fused_ordering(701) 00:15:23.101 fused_ordering(702) 00:15:23.101 fused_ordering(703) 00:15:23.101 fused_ordering(704) 00:15:23.101 fused_ordering(705) 00:15:23.101 fused_ordering(706) 00:15:23.101 fused_ordering(707) 00:15:23.101 fused_ordering(708) 00:15:23.101 fused_ordering(709) 00:15:23.101 fused_ordering(710) 00:15:23.101 fused_ordering(711) 00:15:23.101 fused_ordering(712) 00:15:23.101 fused_ordering(713) 00:15:23.101 fused_ordering(714) 00:15:23.101 fused_ordering(715) 00:15:23.101 fused_ordering(716) 00:15:23.101 fused_ordering(717) 00:15:23.101 fused_ordering(718) 00:15:23.101 fused_ordering(719) 00:15:23.101 fused_ordering(720) 00:15:23.101 fused_ordering(721) 00:15:23.101 fused_ordering(722) 00:15:23.101 fused_ordering(723) 00:15:23.101 fused_ordering(724) 00:15:23.101 fused_ordering(725) 00:15:23.101 fused_ordering(726) 00:15:23.101 fused_ordering(727) 00:15:23.101 fused_ordering(728) 00:15:23.101 fused_ordering(729) 00:15:23.101 fused_ordering(730) 00:15:23.101 fused_ordering(731) 00:15:23.101 fused_ordering(732) 00:15:23.101 fused_ordering(733) 00:15:23.101 fused_ordering(734) 00:15:23.101 fused_ordering(735) 00:15:23.101 fused_ordering(736) 00:15:23.101 fused_ordering(737) 00:15:23.101 fused_ordering(738) 00:15:23.101 fused_ordering(739) 00:15:23.101 fused_ordering(740) 00:15:23.101 fused_ordering(741) 00:15:23.101 fused_ordering(742) 00:15:23.101 fused_ordering(743) 00:15:23.101 fused_ordering(744) 00:15:23.101 fused_ordering(745) 00:15:23.101 fused_ordering(746) 00:15:23.101 fused_ordering(747) 00:15:23.101 fused_ordering(748) 00:15:23.101 fused_ordering(749) 00:15:23.101 fused_ordering(750) 00:15:23.101 fused_ordering(751) 00:15:23.101 fused_ordering(752) 00:15:23.101 fused_ordering(753) 00:15:23.101 fused_ordering(754) 00:15:23.101 fused_ordering(755) 
00:15:23.101 fused_ordering(756) 00:15:23.101 fused_ordering(757) 00:15:23.101 fused_ordering(758) 00:15:23.101 fused_ordering(759) 00:15:23.101 fused_ordering(760) 00:15:23.101 fused_ordering(761) 00:15:23.101 fused_ordering(762) 00:15:23.101 fused_ordering(763) 00:15:23.101 fused_ordering(764) 00:15:23.101 fused_ordering(765) 00:15:23.101 fused_ordering(766) 00:15:23.101 fused_ordering(767) 00:15:23.101 fused_ordering(768) 00:15:23.101 fused_ordering(769) 00:15:23.101 fused_ordering(770) 00:15:23.101 fused_ordering(771) 00:15:23.101 fused_ordering(772) 00:15:23.101 fused_ordering(773) 00:15:23.101 fused_ordering(774) 00:15:23.101 fused_ordering(775) 00:15:23.101 fused_ordering(776) 00:15:23.101 fused_ordering(777) 00:15:23.101 fused_ordering(778) 00:15:23.101 fused_ordering(779) 00:15:23.101 fused_ordering(780) 00:15:23.101 fused_ordering(781) 00:15:23.101 fused_ordering(782) 00:15:23.101 fused_ordering(783) 00:15:23.101 fused_ordering(784) 00:15:23.101 fused_ordering(785) 00:15:23.101 fused_ordering(786) 00:15:23.101 fused_ordering(787) 00:15:23.101 fused_ordering(788) 00:15:23.101 fused_ordering(789) 00:15:23.101 fused_ordering(790) 00:15:23.101 fused_ordering(791) 00:15:23.101 fused_ordering(792) 00:15:23.101 fused_ordering(793) 00:15:23.101 fused_ordering(794) 00:15:23.101 fused_ordering(795) 00:15:23.101 fused_ordering(796) 00:15:23.101 fused_ordering(797) 00:15:23.101 fused_ordering(798) 00:15:23.101 fused_ordering(799) 00:15:23.101 fused_ordering(800) 00:15:23.101 fused_ordering(801) 00:15:23.101 fused_ordering(802) 00:15:23.101 fused_ordering(803) 00:15:23.101 fused_ordering(804) 00:15:23.101 fused_ordering(805) 00:15:23.101 fused_ordering(806) 00:15:23.101 fused_ordering(807) 00:15:23.101 fused_ordering(808) 00:15:23.101 fused_ordering(809) 00:15:23.102 fused_ordering(810) 00:15:23.102 fused_ordering(811) 00:15:23.102 fused_ordering(812) 00:15:23.102 fused_ordering(813) 00:15:23.102 fused_ordering(814) 00:15:23.102 fused_ordering(815) 00:15:23.102 
fused_ordering(816) 00:15:23.102 fused_ordering(817) 00:15:23.102 fused_ordering(818) 00:15:23.102 fused_ordering(819) 00:15:23.102 fused_ordering(820) 00:15:23.674 fused_ordering(821) 00:15:23.674 fused_ordering(822) 00:15:23.674 fused_ordering(823) 00:15:23.674 fused_ordering(824) 00:15:23.674 fused_ordering(825) 00:15:23.674 fused_ordering(826) 00:15:23.674 fused_ordering(827) 00:15:23.674 fused_ordering(828) 00:15:23.674 fused_ordering(829) 00:15:23.674 fused_ordering(830) 00:15:23.674 fused_ordering(831) 00:15:23.674 fused_ordering(832) 00:15:23.674 fused_ordering(833) 00:15:23.674 fused_ordering(834) 00:15:23.674 fused_ordering(835) 00:15:23.674 fused_ordering(836) 00:15:23.674 fused_ordering(837) 00:15:23.674 fused_ordering(838) 00:15:23.674 fused_ordering(839) 00:15:23.674 fused_ordering(840) 00:15:23.674 fused_ordering(841) 00:15:23.674 fused_ordering(842) 00:15:23.674 fused_ordering(843) 00:15:23.674 fused_ordering(844) 00:15:23.674 fused_ordering(845) 00:15:23.674 fused_ordering(846) 00:15:23.674 fused_ordering(847) 00:15:23.674 fused_ordering(848) 00:15:23.674 fused_ordering(849) 00:15:23.674 fused_ordering(850) 00:15:23.674 fused_ordering(851) 00:15:23.674 fused_ordering(852) 00:15:23.674 fused_ordering(853) 00:15:23.674 fused_ordering(854) 00:15:23.674 fused_ordering(855) 00:15:23.674 fused_ordering(856) 00:15:23.674 fused_ordering(857) 00:15:23.674 fused_ordering(858) 00:15:23.674 fused_ordering(859) 00:15:23.674 fused_ordering(860) 00:15:23.674 fused_ordering(861) 00:15:23.674 fused_ordering(862) 00:15:23.674 fused_ordering(863) 00:15:23.674 fused_ordering(864) 00:15:23.674 fused_ordering(865) 00:15:23.674 fused_ordering(866) 00:15:23.674 fused_ordering(867) 00:15:23.674 fused_ordering(868) 00:15:23.674 fused_ordering(869) 00:15:23.674 fused_ordering(870) 00:15:23.674 fused_ordering(871) 00:15:23.674 fused_ordering(872) 00:15:23.674 fused_ordering(873) 00:15:23.674 fused_ordering(874) 00:15:23.674 fused_ordering(875) 00:15:23.674 fused_ordering(876) 
00:15:23.674 fused_ordering(877) 00:15:23.674 fused_ordering(878) 00:15:23.674 fused_ordering(879) 00:15:23.674 fused_ordering(880) 00:15:23.674 fused_ordering(881) 00:15:23.674 fused_ordering(882) 00:15:23.674 fused_ordering(883) 00:15:23.674 fused_ordering(884) 00:15:23.674 fused_ordering(885) 00:15:23.674 fused_ordering(886) 00:15:23.674 fused_ordering(887) 00:15:23.674 fused_ordering(888) 00:15:23.674 fused_ordering(889) 00:15:23.674 fused_ordering(890) 00:15:23.674 fused_ordering(891) 00:15:23.674 fused_ordering(892) 00:15:23.674 fused_ordering(893) 00:15:23.674 fused_ordering(894) 00:15:23.675 fused_ordering(895) 00:15:23.675 fused_ordering(896) 00:15:23.675 fused_ordering(897) 00:15:23.675 fused_ordering(898) 00:15:23.675 fused_ordering(899) 00:15:23.675 fused_ordering(900) 00:15:23.675 fused_ordering(901) 00:15:23.675 fused_ordering(902) 00:15:23.675 fused_ordering(903) 00:15:23.675 fused_ordering(904) 00:15:23.675 fused_ordering(905) 00:15:23.675 fused_ordering(906) 00:15:23.675 fused_ordering(907) 00:15:23.675 fused_ordering(908) 00:15:23.675 fused_ordering(909) 00:15:23.675 fused_ordering(910) 00:15:23.675 fused_ordering(911) 00:15:23.675 fused_ordering(912) 00:15:23.675 fused_ordering(913) 00:15:23.675 fused_ordering(914) 00:15:23.675 fused_ordering(915) 00:15:23.675 fused_ordering(916) 00:15:23.675 fused_ordering(917) 00:15:23.675 fused_ordering(918) 00:15:23.675 fused_ordering(919) 00:15:23.675 fused_ordering(920) 00:15:23.675 fused_ordering(921) 00:15:23.675 fused_ordering(922) 00:15:23.675 fused_ordering(923) 00:15:23.675 fused_ordering(924) 00:15:23.675 fused_ordering(925) 00:15:23.675 fused_ordering(926) 00:15:23.675 fused_ordering(927) 00:15:23.675 fused_ordering(928) 00:15:23.675 fused_ordering(929) 00:15:23.675 fused_ordering(930) 00:15:23.675 fused_ordering(931) 00:15:23.675 fused_ordering(932) 00:15:23.675 fused_ordering(933) 00:15:23.675 fused_ordering(934) 00:15:23.675 fused_ordering(935) 00:15:23.675 fused_ordering(936) 00:15:23.675 
fused_ordering(937) 00:15:23.675 fused_ordering(938) 00:15:23.675 fused_ordering(939) 00:15:23.675 fused_ordering(940) 00:15:23.675 fused_ordering(941) 00:15:23.675 fused_ordering(942) 00:15:23.675 fused_ordering(943) 00:15:23.675 fused_ordering(944) 00:15:23.675 fused_ordering(945) 00:15:23.675 fused_ordering(946) 00:15:23.675 fused_ordering(947) 00:15:23.675 fused_ordering(948) 00:15:23.675 fused_ordering(949) 00:15:23.675 fused_ordering(950) 00:15:23.675 fused_ordering(951) 00:15:23.675 fused_ordering(952) 00:15:23.675 fused_ordering(953) 00:15:23.675 fused_ordering(954) 00:15:23.675 fused_ordering(955) 00:15:23.675 fused_ordering(956) 00:15:23.675 fused_ordering(957) 00:15:23.675 fused_ordering(958) 00:15:23.675 fused_ordering(959) 00:15:23.675 fused_ordering(960) 00:15:23.675 fused_ordering(961) 00:15:23.675 fused_ordering(962) 00:15:23.675 fused_ordering(963) 00:15:23.675 fused_ordering(964) 00:15:23.675 fused_ordering(965) 00:15:23.675 fused_ordering(966) 00:15:23.675 fused_ordering(967) 00:15:23.675 fused_ordering(968) 00:15:23.675 fused_ordering(969) 00:15:23.675 fused_ordering(970) 00:15:23.675 fused_ordering(971) 00:15:23.675 fused_ordering(972) 00:15:23.675 fused_ordering(973) 00:15:23.675 fused_ordering(974) 00:15:23.675 fused_ordering(975) 00:15:23.675 fused_ordering(976) 00:15:23.675 fused_ordering(977) 00:15:23.675 fused_ordering(978) 00:15:23.675 fused_ordering(979) 00:15:23.675 fused_ordering(980) 00:15:23.675 fused_ordering(981) 00:15:23.675 fused_ordering(982) 00:15:23.675 fused_ordering(983) 00:15:23.675 fused_ordering(984) 00:15:23.675 fused_ordering(985) 00:15:23.675 fused_ordering(986) 00:15:23.675 fused_ordering(987) 00:15:23.675 fused_ordering(988) 00:15:23.675 fused_ordering(989) 00:15:23.675 fused_ordering(990) 00:15:23.675 fused_ordering(991) 00:15:23.675 fused_ordering(992) 00:15:23.675 fused_ordering(993) 00:15:23.675 fused_ordering(994) 00:15:23.675 fused_ordering(995) 00:15:23.675 fused_ordering(996) 00:15:23.675 fused_ordering(997) 
00:15:23.675 fused_ordering(998) 00:15:23.675 fused_ordering(999) 00:15:23.675 fused_ordering(1000) 00:15:23.675 fused_ordering(1001) 00:15:23.675 fused_ordering(1002) 00:15:23.675 fused_ordering(1003) 00:15:23.675 fused_ordering(1004) 00:15:23.675 fused_ordering(1005) 00:15:23.675 fused_ordering(1006) 00:15:23.675 fused_ordering(1007) 00:15:23.675 fused_ordering(1008) 00:15:23.675 fused_ordering(1009) 00:15:23.675 fused_ordering(1010) 00:15:23.675 fused_ordering(1011) 00:15:23.675 fused_ordering(1012) 00:15:23.675 fused_ordering(1013) 00:15:23.675 fused_ordering(1014) 00:15:23.675 fused_ordering(1015) 00:15:23.675 fused_ordering(1016) 00:15:23.675 fused_ordering(1017) 00:15:23.675 fused_ordering(1018) 00:15:23.675 fused_ordering(1019) 00:15:23.675 fused_ordering(1020) 00:15:23.675 fused_ordering(1021) 00:15:23.675 fused_ordering(1022) 00:15:23.675 fused_ordering(1023) 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.675 21:08:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.675 rmmod nvme_tcp 00:15:23.675 rmmod nvme_fabrics 00:15:23.675 rmmod nvme_keyring 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2031663 ']' 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2031663 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2031663 ']' 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2031663 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031663 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031663' 00:15:23.675 killing process with pid 2031663 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2031663 00:15:23.675 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2031663 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.935 21:08:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.475 00:15:26.475 real 0m13.613s 00:15:26.475 user 0m6.551s 00:15:26.475 sys 0m7.619s 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:26.475 ************************************ 00:15:26.475 END TEST nvmf_fused_ordering 00:15:26.475 ************************************ 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:26.475 21:08:27 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.475 ************************************ 00:15:26.475 START TEST nvmf_ns_masking 00:15:26.475 ************************************ 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:26.475 * Looking for test storage... 00:15:26.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.475 21:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.475 --rc genhtml_branch_coverage=1 00:15:26.475 --rc genhtml_function_coverage=1 00:15:26.475 --rc genhtml_legend=1 00:15:26.475 --rc geninfo_all_blocks=1 00:15:26.475 --rc geninfo_unexecuted_blocks=1 00:15:26.475 00:15:26.475 ' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.475 --rc genhtml_branch_coverage=1 00:15:26.475 --rc genhtml_function_coverage=1 00:15:26.475 --rc genhtml_legend=1 00:15:26.475 --rc geninfo_all_blocks=1 00:15:26.475 --rc geninfo_unexecuted_blocks=1 00:15:26.475 00:15:26.475 ' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.475 --rc genhtml_branch_coverage=1 00:15:26.475 --rc genhtml_function_coverage=1 00:15:26.475 --rc genhtml_legend=1 00:15:26.475 --rc geninfo_all_blocks=1 00:15:26.475 --rc geninfo_unexecuted_blocks=1 00:15:26.475 00:15:26.475 ' 00:15:26.475 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.475 --rc genhtml_branch_coverage=1 00:15:26.475 --rc 
genhtml_function_coverage=1 00:15:26.475 --rc genhtml_legend=1 00:15:26.476 --rc geninfo_all_blocks=1 00:15:26.476 --rc geninfo_unexecuted_blocks=1 00:15:26.476 00:15:26.476 ' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2a4f4467-8715-480a-b312-9c63c576fa01 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9621d772-3044-4819-853d-5c8b4de0ce1c 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=87e97d1d-9103-4c56-a1ef-daffd05e8879 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:26.476 21:08:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:34.612 21:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.612 21:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:34.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:34.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:15:34.612 Found net devices under 0000:31:00.0: cvl_0_0 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:34.612 Found net devices under 0000:31:00.1: cvl_0_1 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.612 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:34.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:15:34.613 00:15:34.613 --- 10.0.0.2 ping statistics --- 00:15:34.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.613 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:15:34.613 00:15:34.613 --- 10.0.0.1 ping statistics --- 00:15:34.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.613 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2037040 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2037040 
00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2037040 ']' 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.613 21:08:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 [2024-12-05 21:08:36.020195] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:15:34.613 [2024-12-05 21:08:36.020262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.873 [2024-12-05 21:08:36.111265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.873 [2024-12-05 21:08:36.151253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.873 [2024-12-05 21:08:36.151287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:34.873 [2024-12-05 21:08:36.151295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.873 [2024-12-05 21:08:36.151302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.873 [2024-12-05 21:08:36.151308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.873 [2024-12-05 21:08:36.151896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.443 21:08:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:35.703 [2024-12-05 21:08:36.998531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.703 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:35.703 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:35.703 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:35.964 Malloc1 00:15:35.964 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:35.964 Malloc2 00:15:35.964 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.224 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:36.484 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.484 [2024-12-05 21:08:37.850420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.484 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:36.484 21:08:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 87e97d1d-9103-4c56-a1ef-daffd05e8879 -a 10.0.0.2 -s 4420 -i 4 00:15:36.745 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:36.745 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:36.745 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.745 21:08:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:36.745 21:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:38.659 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:38.659 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:38.659 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.919 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:38.919 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.919 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:38.919 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:38.919 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:38.920 [ 0]:0x1 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:38.920 
21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87404c909ea44dfba1cbea1c96296103 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87404c909ea44dfba1cbea1c96296103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:38.920 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:39.180 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:39.181 [ 0]:0x1 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87404c909ea44dfba1cbea1c96296103 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87404c909ea44dfba1cbea1c96296103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:39.181 [ 1]:0x2 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:39.181 21:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:39.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.181 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.442 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:39.702 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:39.702 21:08:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 87e97d1d-9103-4c56-a1ef-daffd05e8879 -a 10.0.0.2 -s 4420 -i 4 00:15:39.962 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:39.962 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:39.962 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.962 21:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:39.962 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:39.962 21:08:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:41.874 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:42.135 [ 0]:0x2 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:42.135 [ 0]:0x1 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:42.135 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87404c909ea44dfba1cbea1c96296103 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87404c909ea44dfba1cbea1c96296103 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:42.395 [ 1]:0x2 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.395 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:42.655 [ 0]:0x2 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:42.655 21:08:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.655 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:42.915 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:42.915 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 87e97d1d-9103-4c56-a1ef-daffd05e8879 -a 10.0.0.2 -s 4420 -i 4 00:15:43.175 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:43.175 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:43.175 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.175 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:43.175 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:43.175 21:08:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:45.080 [ 0]:0x1 00:15:45.080 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:45.080 21:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=87404c909ea44dfba1cbea1c96296103 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 87404c909ea44dfba1cbea1c96296103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:45.340 [ 1]:0x2 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:45.340 
21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:45.340 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:45.600 [ 0]:0x2 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.600 21:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:45.600 21:08:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:45.859 [2024-12-05 21:08:47.085706] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:45.859 request: 00:15:45.859 { 00:15:45.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.859 "nsid": 2, 00:15:45.859 "host": "nqn.2016-06.io.spdk:host1", 00:15:45.859 "method": "nvmf_ns_remove_host", 00:15:45.859 "req_id": 1 00:15:45.859 } 00:15:45.859 Got JSON-RPC error response 00:15:45.859 response: 00:15:45.859 { 00:15:45.859 "code": -32602, 00:15:45.859 "message": "Invalid parameters" 00:15:45.859 } 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:45.859 21:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:45.859 [ 0]:0x2 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:45.859 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:45.860 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e4a0e39e4ad43349f46765ca2264add 00:15:45.860 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e4a0e39e4ad43349f46765ca2264add != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.860 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:45.860 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:46.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2039267 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2039267 /var/tmp/host.sock 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2039267 ']' 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:46.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.120 21:08:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:46.120 [2024-12-05 21:08:47.393786] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:15:46.120 [2024-12-05 21:08:47.393836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039267 ] 00:15:46.120 [2024-12-05 21:08:47.489242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.120 [2024-12-05 21:08:47.525251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.061 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.061 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:47.061 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.061 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:47.322 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2a4f4467-8715-480a-b312-9c63c576fa01 00:15:47.322 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:47.322 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2A4F44678715480AB3129C63C576FA01 -i 00:15:47.322 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9621d772-3044-4819-853d-5c8b4de0ce1c 00:15:47.322 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:47.322 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9621D77230444819853D5C8B4DE0CE1C -i 00:15:47.583 21:08:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.583 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:47.869 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:47.869 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:48.128 nvme0n1 00:15:48.128 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:48.128 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:48.389 nvme1n2 00:15:48.389 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:48.389 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:48.389 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:48.389 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:48.389 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:48.649 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:48.649 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:48.650 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:48.650 21:08:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:48.909 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2a4f4467-8715-480a-b312-9c63c576fa01 == \2\a\4\f\4\4\6\7\-\8\7\1\5\-\4\8\0\a\-\b\3\1\2\-\9\c\6\3\c\5\7\6\f\a\0\1 ]] 00:15:48.909 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:48.909 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:48.909 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:48.909 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9621d772-3044-4819-853d-5c8b4de0ce1c == \9\6\2\1\d\7\7\2\-\3\0\4\4\-\4\8\1\9\-\8\5\3\d\-\5\c\8\b\4\d\e\0\c\e\1\c ]] 00:15:48.909 21:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.169 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:49.430 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2a4f4467-8715-480a-b312-9c63c576fa01 00:15:49.430 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:49.430 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2A4F44678715480AB3129C63C576FA01 00:15:49.430 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2A4F44678715480AB3129C63C576FA01 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2A4F44678715480AB3129C63C576FA01 00:15:49.431 [2024-12-05 21:08:50.848205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:49.431 [2024-12-05 21:08:50.848237] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:49.431 [2024-12-05 21:08:50.848247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.431 request: 00:15:49.431 { 00:15:49.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.431 "namespace": { 00:15:49.431 "bdev_name": "invalid", 00:15:49.431 "nsid": 1, 00:15:49.431 "nguid": "2A4F44678715480AB3129C63C576FA01", 00:15:49.431 "no_auto_visible": false, 00:15:49.431 "hide_metadata": false 00:15:49.431 }, 00:15:49.431 "method": "nvmf_subsystem_add_ns", 00:15:49.431 "req_id": 1 00:15:49.431 } 00:15:49.431 Got JSON-RPC error response 00:15:49.431 response: 00:15:49.431 { 00:15:49.431 "code": -32602, 00:15:49.431 "message": "Invalid parameters" 00:15:49.431 } 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:49.431 21:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2a4f4467-8715-480a-b312-9c63c576fa01 00:15:49.431 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:49.691 21:08:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2A4F44678715480AB3129C63C576FA01 -i 00:15:49.691 21:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2039267 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2039267 ']' 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2039267 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:52.236 21:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2039267 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2039267' 00:15:52.236 killing process with pid 2039267 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2039267 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2039267 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:52.236 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:52.497 rmmod nvme_tcp 00:15:52.497 rmmod nvme_fabrics 00:15:52.497 rmmod nvme_keyring 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2037040 ']' 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2037040 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2037040 ']' 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2037040 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2037040 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2037040' 00:15:52.497 killing process with pid 2037040 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2037040 00:15:52.497 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2037040 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.758 21:08:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:54.668 00:15:54.668 real 0m28.646s 00:15:54.668 user 0m31.515s 00:15:54.668 sys 0m8.725s 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:54.668 ************************************ 00:15:54.668 END TEST nvmf_ns_masking 00:15:54.668 ************************************ 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.668 21:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.930 ************************************ 00:15:54.930 START TEST nvmf_nvme_cli 00:15:54.930 ************************************ 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:54.930 * Looking for test storage... 00:15:54.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.930 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.931 --rc genhtml_branch_coverage=1 00:15:54.931 --rc genhtml_function_coverage=1 00:15:54.931 --rc genhtml_legend=1 00:15:54.931 --rc geninfo_all_blocks=1 00:15:54.931 --rc geninfo_unexecuted_blocks=1 00:15:54.931 
00:15:54.931 ' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.931 --rc genhtml_branch_coverage=1 00:15:54.931 --rc genhtml_function_coverage=1 00:15:54.931 --rc genhtml_legend=1 00:15:54.931 --rc geninfo_all_blocks=1 00:15:54.931 --rc geninfo_unexecuted_blocks=1 00:15:54.931 00:15:54.931 ' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.931 --rc genhtml_branch_coverage=1 00:15:54.931 --rc genhtml_function_coverage=1 00:15:54.931 --rc genhtml_legend=1 00:15:54.931 --rc geninfo_all_blocks=1 00:15:54.931 --rc geninfo_unexecuted_blocks=1 00:15:54.931 00:15:54.931 ' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:54.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.931 --rc genhtml_branch_coverage=1 00:15:54.931 --rc genhtml_function_coverage=1 00:15:54.931 --rc genhtml_legend=1 00:15:54.931 --rc geninfo_all_blocks=1 00:15:54.931 --rc geninfo_unexecuted_blocks=1 00:15:54.931 00:15:54.931 ' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.931 21:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.931 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.932 21:08:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:03.065 21:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:03.065 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:03.065 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.065 21:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.065 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:03.066 Found net devices under 0000:31:00.0: cvl_0_0 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:03.066 Found net devices under 0000:31:00.1: cvl_0_1 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.066 21:09:04 
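The device-gathering trace above (the `e810`/`x722`/`mlx` ID tables from nvmf/common.sh and the `Found 0000:31:00.x (0x8086 - 0x159b)` messages) amounts to a vendor:device lookup. A minimal standalone sketch of that classification, with the ID table copied from the common.sh lines visible in this log (`classify_nic` itself is a hypothetical helper, not the real implementation):

```shell
# Classify a PCI NIC by "vendor device" pair the way the log's
# gather_supported_nvmf_pci_devs does. IDs taken from nvmf/common.sh@325-344.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|\
        0x15b3:0x1013)               echo mlx ;;     # Mellanox ConnectX/BlueField
        *)                           echo unknown ;;
    esac
}
```

On this node the two E810 ports (0x8086:0x159b, driver `ice`) land in the `e810` bucket, which becomes `pci_devs` for the TCP path.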
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.066 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.326 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.587 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.587 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.587 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:16:03.588 00:16:03.588 --- 10.0.0.2 ping statistics --- 00:16:03.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.588 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:16:03.588 00:16:03.588 --- 10.0.0.1 ping statistics --- 00:16:03.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.588 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:03.588 21:09:04 
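The nvmf_tcp_init phase traced above builds a two-endpoint topology from the two E810 ports: the target port is moved into its own network namespace so initiator and target get distinct IP stacks on one host. Condensed from the log (commands are as traced; requires root and the `cvl_0_0`/`cvl_0_1` interface names already present, so this is a setup fragment, not runnable standalone):

```shell
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
```

The two single-packet pings in the log (0.487 ms and 0.244 ms, 0% loss) are the sanity check before the target application is started inside the namespace.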
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2045469 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2045469 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2045469 ']' 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.588 21:09:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.588 [2024-12-05 21:09:04.907581] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:16:03.588 [2024-12-05 21:09:04.907654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.588 [2024-12-05 21:09:04.999741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.849 [2024-12-05 21:09:05.043856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.849 [2024-12-05 21:09:05.043902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.849 [2024-12-05 21:09:05.043911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.849 [2024-12-05 21:09:05.043918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.849 [2024-12-05 21:09:05.043924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.849 [2024-12-05 21:09:05.045580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.849 [2024-12-05 21:09:05.045694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.849 [2024-12-05 21:09:05.045853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.849 [2024-12-05 21:09:05.045853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.420 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.420 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.421 [2024-12-05 21:09:05.758199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.421 Malloc0 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.421 Malloc1 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.421 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.682 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.682 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.682 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.683 [2024-12-05 21:09:05.863761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.683 21:09:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:16:04.683 00:16:04.683 Discovery Log Number of Records 2, Generation counter 2 00:16:04.683 =====Discovery Log Entry 0====== 00:16:04.683 trtype: tcp 00:16:04.683 adrfam: ipv4 00:16:04.683 subtype: current discovery subsystem 00:16:04.683 treq: not required 00:16:04.683 portid: 0 00:16:04.683 trsvcid: 4420 
00:16:04.683 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:04.683 traddr: 10.0.0.2 00:16:04.683 eflags: explicit discovery connections, duplicate discovery information 00:16:04.683 sectype: none 00:16:04.683 =====Discovery Log Entry 1====== 00:16:04.683 trtype: tcp 00:16:04.683 adrfam: ipv4 00:16:04.683 subtype: nvme subsystem 00:16:04.683 treq: not required 00:16:04.683 portid: 0 00:16:04.683 trsvcid: 4420 00:16:04.683 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:04.683 traddr: 10.0.0.2 00:16:04.683 eflags: none 00:16:04.683 sectype: none 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:04.683 21:09:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.743 21:09:07 
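The target-side configuration traced between `nvmfappstart` and the `nvme connect` above, condensed into one sequence. This is a simplified sketch, not the literal test script: `$rpc` is assumed shorthand for SPDK's `scripts/rpc.py`, and the real run uses `waitforlisten` to synchronize with the target before issuing RPCs:

```shell
# Start nvmf_tgt inside the target namespace (pid 2045469 in the log).
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the discovery log above reports two records
# (the discovery subsystem itself plus cnode1), then the test connects.
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
```

With two namespaces in the subsystem, the connect should surface two block devices (`/dev/nvme0n1`, `/dev/nvme0n2`) carrying the serial `SPDKISFASTANDAWESOME`, which is exactly what the subsequent `waitforserial` check counts.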
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:06.743 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:06.743 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.743 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:06.743 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:06.743 21:09:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:08.659 
21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:08.659 /dev/nvme0n2 ]] 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:08.659 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:08.660 rmmod nvme_tcp 00:16:08.660 rmmod nvme_fabrics 00:16:08.660 rmmod nvme_keyring 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2045469 ']' 
00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2045469 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2045469 ']' 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2045469 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2045469 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2045469' 00:16:08.660 killing process with pid 2045469 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2045469 00:16:08.660 21:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2045469 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.920 21:09:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.831 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:10.831 00:16:10.831 real 0m16.119s 00:16:10.831 user 0m22.935s 00:16:10.831 sys 0m6.984s 00:16:10.831 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.831 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.831 ************************************ 00:16:10.831 END TEST nvmf_nvme_cli 00:16:10.831 ************************************ 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.092 ************************************ 00:16:11.092 
START TEST nvmf_vfio_user 00:16:11.092 ************************************ 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:11.092 * Looking for test storage... 00:16:11.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:16:11.092 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.093 21:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:11.093 21:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:11.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.093 --rc genhtml_branch_coverage=1 00:16:11.093 --rc genhtml_function_coverage=1 00:16:11.093 --rc genhtml_legend=1 00:16:11.093 --rc geninfo_all_blocks=1 00:16:11.093 --rc geninfo_unexecuted_blocks=1 00:16:11.093 00:16:11.093 ' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:11.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.093 --rc genhtml_branch_coverage=1 00:16:11.093 --rc genhtml_function_coverage=1 00:16:11.093 --rc genhtml_legend=1 00:16:11.093 --rc geninfo_all_blocks=1 00:16:11.093 --rc geninfo_unexecuted_blocks=1 00:16:11.093 00:16:11.093 ' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:11.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.093 --rc genhtml_branch_coverage=1 00:16:11.093 --rc genhtml_function_coverage=1 00:16:11.093 --rc genhtml_legend=1 00:16:11.093 --rc geninfo_all_blocks=1 00:16:11.093 --rc geninfo_unexecuted_blocks=1 00:16:11.093 00:16:11.093 ' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:11.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.093 --rc genhtml_branch_coverage=1 00:16:11.093 --rc genhtml_function_coverage=1 00:16:11.093 --rc genhtml_legend=1 00:16:11.093 --rc geninfo_all_blocks=1 00:16:11.093 --rc geninfo_unexecuted_blocks=1 00:16:11.093 00:16:11.093 ' 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.093 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.354 
21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:11.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:11.354 21:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2047687 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2047687' 00:16:11.354 Process pid: 2047687 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2047687 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2047687 ']' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m '[0,1,2,3]' 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.354 21:09:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:11.354 [2024-12-05 21:09:12.617275] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:16:11.354 [2024-12-05 21:09:12.617354] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.354 [2024-12-05 21:09:12.701832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.354 [2024-12-05 21:09:12.743304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.354 [2024-12-05 21:09:12.743340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.355 [2024-12-05 21:09:12.743348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.355 [2024-12-05 21:09:12.743355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.355 [2024-12-05 21:09:12.743361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:11.355 [2024-12-05 21:09:12.744924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.355 [2024-12-05 21:09:12.745212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.355 [2024-12-05 21:09:12.745370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.355 [2024-12-05 21:09:12.745371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.295 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.295 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:12.295 21:09:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:13.237 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:13.237 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:13.237 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:13.237 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:13.237 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:13.238 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:13.498 Malloc1 00:16:13.498 21:09:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:13.759 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:13.759 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:14.019 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:14.019 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:14.019 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:14.278 Malloc2 00:16:14.278 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:14.537 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:14.537 21:09:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:14.797 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:14.797 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:14.797 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:14.797 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:14.797 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:14.797 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:14.797 [2024-12-05 21:09:16.145906] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:16:14.797 [2024-12-05 21:09:16.145951] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048381 ] 00:16:14.797 [2024-12-05 21:09:16.197460] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:14.797 [2024-12-05 21:09:16.206194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:14.797 [2024-12-05 21:09:16.206218] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f950cd17000 00:16:14.797 [2024-12-05 21:09:16.207195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.208191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.209200] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.210205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.211202] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.212217] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.213221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.214223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:14.797 [2024-12-05 21:09:16.215236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:14.798 [2024-12-05 21:09:16.215250] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f950cd0c000 00:16:14.798 [2024-12-05 21:09:16.216577] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:15.059 [2024-12-05 21:09:16.237485] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:15.059 [2024-12-05 21:09:16.237517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:15.059 [2024-12-05 21:09:16.240379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:16:15.059 [2024-12-05 21:09:16.240421] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:15.059 [2024-12-05 21:09:16.240504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:15.059 [2024-12-05 21:09:16.240518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:15.059 [2024-12-05 21:09:16.240523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:15.059 [2024-12-05 21:09:16.241376] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:15.059 [2024-12-05 21:09:16.241386] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:15.059 [2024-12-05 21:09:16.241394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:15.059 [2024-12-05 21:09:16.242379] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:15.059 [2024-12-05 21:09:16.242388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:15.059 [2024-12-05 21:09:16.242396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:15.059 [2024-12-05 21:09:16.243385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:15.059 [2024-12-05 21:09:16.243394] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:15.059 [2024-12-05 21:09:16.244385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:15.059 [2024-12-05 21:09:16.244393] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:15.059 [2024-12-05 21:09:16.244398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:15.059 [2024-12-05 21:09:16.244405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:15.059 [2024-12-05 21:09:16.244513] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:15.059 [2024-12-05 21:09:16.244518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:15.059 [2024-12-05 21:09:16.244523] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:15.059 [2024-12-05 21:09:16.245393] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:15.059 [2024-12-05 21:09:16.246397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:15.059 [2024-12-05 21:09:16.247409] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:16:15.060 [2024-12-05 21:09:16.248404] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:15.060 [2024-12-05 21:09:16.248459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.060 [2024-12-05 21:09:16.249422] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:15.060 [2024-12-05 21:09:16.249430] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.060 [2024-12-05 21:09:16.249436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:15.060 [2024-12-05 21:09:16.249468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:15.060 [2024-12-05 21:09:16.249494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.060 [2024-12-05 21:09:16.249498] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.060 [2024-12-05 21:09:16.249510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.249547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.249556] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:15.060 [2024-12-05 21:09:16.249563] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:15.060 [2024-12-05 21:09:16.249568] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:15.060 [2024-12-05 21:09:16.249573] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:15.060 [2024-12-05 21:09:16.249577] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:15.060 [2024-12-05 21:09:16.249582] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:15.060 [2024-12-05 21:09:16.249587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.249614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.249625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.060 [2024-12-05 
21:09:16.249636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.060 [2024-12-05 21:09:16.249644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.060 [2024-12-05 21:09:16.249653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.060 [2024-12-05 21:09:16.249658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.249685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.249691] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:15.060 [2024-12-05 21:09:16.249696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.249730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.249791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:15.060 [2024-12-05 21:09:16.249811] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:15.060 [2024-12-05 21:09:16.249815] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.060 [2024-12-05 21:09:16.249821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.249833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.249842] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:15.060 [2024-12-05 21:09:16.249851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.249949] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:15.060 [2024-12-05 21:09:16.249954] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.060 [2024-12-05 21:09:16.249959] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.060 [2024-12-05 21:09:16.249965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.249984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.249997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250013] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:15.060 [2024-12-05 21:09:16.250017] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.060 [2024-12-05 21:09:16.250020] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.060 [2024-12-05 21:09:16.250026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.250036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.250044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250082] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:15.060 [2024-12-05 21:09:16.250087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:15.060 [2024-12-05 21:09:16.250092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:15.060 [2024-12-05 21:09:16.250109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.250120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.250131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.250138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.250149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.250159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.250172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:15.060 [2024-12-05 21:09:16.250182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:15.060 [2024-12-05 21:09:16.250196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:15.060 [2024-12-05 21:09:16.250201] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:15.060 [2024-12-05 21:09:16.250205] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:15.060 [2024-12-05 21:09:16.250209] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:15.060 [2024-12-05 21:09:16.250212] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:15.061 [2024-12-05 21:09:16.250218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:16:15.061 [2024-12-05 21:09:16.250226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:15.061 [2024-12-05 21:09:16.250230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:15.061 [2024-12-05 21:09:16.250234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.061 [2024-12-05 21:09:16.250240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:15.061 [2024-12-05 21:09:16.250247] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:15.061 [2024-12-05 21:09:16.250252] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:15.061 [2024-12-05 21:09:16.250255] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.061 [2024-12-05 21:09:16.250261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:15.061 [2024-12-05 21:09:16.250269] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:15.061 [2024-12-05 21:09:16.250273] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:15.061 [2024-12-05 21:09:16.250276] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:15.061 [2024-12-05 21:09:16.250282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:15.061 [2024-12-05 21:09:16.250290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:16:15.061 [2024-12-05 21:09:16.250301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:15.061 [2024-12-05 21:09:16.250332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:15.061 [2024-12-05 21:09:16.250340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:15.061 ===================================================== 00:16:15.061 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:15.061 ===================================================== 00:16:15.061 Controller Capabilities/Features 00:16:15.061 ================================ 00:16:15.061 Vendor ID: 4e58 00:16:15.061 Subsystem Vendor ID: 4e58 00:16:15.061 Serial Number: SPDK1 00:16:15.061 Model Number: SPDK bdev Controller 00:16:15.061 Firmware Version: 25.01 00:16:15.061 Recommended Arb Burst: 6 00:16:15.061 IEEE OUI Identifier: 8d 6b 50 00:16:15.061 Multi-path I/O 00:16:15.061 May have multiple subsystem ports: Yes 00:16:15.061 May have multiple controllers: Yes 00:16:15.061 Associated with SR-IOV VF: No 00:16:15.061 Max Data Transfer Size: 131072 00:16:15.061 Max Number of Namespaces: 32 00:16:15.061 Max Number of I/O Queues: 127 00:16:15.061 NVMe Specification Version (VS): 1.3 00:16:15.061 NVMe Specification Version (Identify): 1.3 00:16:15.061 Maximum Queue Entries: 256 00:16:15.061 Contiguous Queues Required: Yes 00:16:15.061 Arbitration Mechanisms Supported 00:16:15.061 Weighted Round Robin: Not Supported 00:16:15.061 Vendor Specific: Not Supported 00:16:15.061 Reset Timeout: 15000 ms 00:16:15.061 Doorbell Stride: 4 bytes 00:16:15.061 NVM Subsystem Reset: Not Supported 00:16:15.061 Command Sets Supported 00:16:15.061 NVM Command Set: Supported 00:16:15.061 Boot Partition: Not Supported 00:16:15.061 Memory 
Page Size Minimum: 4096 bytes 00:16:15.061 Memory Page Size Maximum: 4096 bytes 00:16:15.061 Persistent Memory Region: Not Supported 00:16:15.061 Optional Asynchronous Events Supported 00:16:15.061 Namespace Attribute Notices: Supported 00:16:15.061 Firmware Activation Notices: Not Supported 00:16:15.061 ANA Change Notices: Not Supported 00:16:15.061 PLE Aggregate Log Change Notices: Not Supported 00:16:15.061 LBA Status Info Alert Notices: Not Supported 00:16:15.061 EGE Aggregate Log Change Notices: Not Supported 00:16:15.061 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.061 Zone Descriptor Change Notices: Not Supported 00:16:15.061 Discovery Log Change Notices: Not Supported 00:16:15.061 Controller Attributes 00:16:15.061 128-bit Host Identifier: Supported 00:16:15.061 Non-Operational Permissive Mode: Not Supported 00:16:15.061 NVM Sets: Not Supported 00:16:15.061 Read Recovery Levels: Not Supported 00:16:15.061 Endurance Groups: Not Supported 00:16:15.061 Predictable Latency Mode: Not Supported 00:16:15.061 Traffic Based Keep ALive: Not Supported 00:16:15.061 Namespace Granularity: Not Supported 00:16:15.061 SQ Associations: Not Supported 00:16:15.061 UUID List: Not Supported 00:16:15.061 Multi-Domain Subsystem: Not Supported 00:16:15.061 Fixed Capacity Management: Not Supported 00:16:15.061 Variable Capacity Management: Not Supported 00:16:15.061 Delete Endurance Group: Not Supported 00:16:15.061 Delete NVM Set: Not Supported 00:16:15.061 Extended LBA Formats Supported: Not Supported 00:16:15.061 Flexible Data Placement Supported: Not Supported 00:16:15.061 00:16:15.061 Controller Memory Buffer Support 00:16:15.061 ================================ 00:16:15.061 Supported: No 00:16:15.061 00:16:15.061 Persistent Memory Region Support 00:16:15.061 ================================ 00:16:15.061 Supported: No 00:16:15.061 00:16:15.061 Admin Command Set Attributes 00:16:15.061 ============================ 00:16:15.061 Security Send/Receive: Not Supported 
00:16:15.061 Format NVM: Not Supported 00:16:15.061 Firmware Activate/Download: Not Supported 00:16:15.061 Namespace Management: Not Supported 00:16:15.061 Device Self-Test: Not Supported 00:16:15.061 Directives: Not Supported 00:16:15.061 NVMe-MI: Not Supported 00:16:15.061 Virtualization Management: Not Supported 00:16:15.061 Doorbell Buffer Config: Not Supported 00:16:15.061 Get LBA Status Capability: Not Supported 00:16:15.061 Command & Feature Lockdown Capability: Not Supported 00:16:15.061 Abort Command Limit: 4 00:16:15.061 Async Event Request Limit: 4 00:16:15.061 Number of Firmware Slots: N/A 00:16:15.061 Firmware Slot 1 Read-Only: N/A 00:16:15.061 Firmware Activation Without Reset: N/A 00:16:15.061 Multiple Update Detection Support: N/A 00:16:15.061 Firmware Update Granularity: No Information Provided 00:16:15.061 Per-Namespace SMART Log: No 00:16:15.061 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.061 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:15.061 Command Effects Log Page: Supported 00:16:15.061 Get Log Page Extended Data: Supported 00:16:15.061 Telemetry Log Pages: Not Supported 00:16:15.061 Persistent Event Log Pages: Not Supported 00:16:15.061 Supported Log Pages Log Page: May Support 00:16:15.061 Commands Supported & Effects Log Page: Not Supported 00:16:15.061 Feature Identifiers & Effects Log Page:May Support 00:16:15.061 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.061 Data Area 4 for Telemetry Log: Not Supported 00:16:15.061 Error Log Page Entries Supported: 128 00:16:15.061 Keep Alive: Supported 00:16:15.061 Keep Alive Granularity: 10000 ms 00:16:15.061 00:16:15.061 NVM Command Set Attributes 00:16:15.061 ========================== 00:16:15.061 Submission Queue Entry Size 00:16:15.061 Max: 64 00:16:15.061 Min: 64 00:16:15.061 Completion Queue Entry Size 00:16:15.061 Max: 16 00:16:15.061 Min: 16 00:16:15.061 Number of Namespaces: 32 00:16:15.061 Compare Command: Supported 00:16:15.061 Write Uncorrectable 
Command: Not Supported 00:16:15.061 Dataset Management Command: Supported 00:16:15.061 Write Zeroes Command: Supported 00:16:15.061 Set Features Save Field: Not Supported 00:16:15.061 Reservations: Not Supported 00:16:15.061 Timestamp: Not Supported 00:16:15.061 Copy: Supported 00:16:15.061 Volatile Write Cache: Present 00:16:15.061 Atomic Write Unit (Normal): 1 00:16:15.061 Atomic Write Unit (PFail): 1 00:16:15.061 Atomic Compare & Write Unit: 1 00:16:15.061 Fused Compare & Write: Supported 00:16:15.061 Scatter-Gather List 00:16:15.061 SGL Command Set: Supported (Dword aligned) 00:16:15.061 SGL Keyed: Not Supported 00:16:15.061 SGL Bit Bucket Descriptor: Not Supported 00:16:15.061 SGL Metadata Pointer: Not Supported 00:16:15.061 Oversized SGL: Not Supported 00:16:15.061 SGL Metadata Address: Not Supported 00:16:15.061 SGL Offset: Not Supported 00:16:15.061 Transport SGL Data Block: Not Supported 00:16:15.061 Replay Protected Memory Block: Not Supported 00:16:15.061 00:16:15.061 Firmware Slot Information 00:16:15.061 ========================= 00:16:15.061 Active slot: 1 00:16:15.061 Slot 1 Firmware Revision: 25.01 00:16:15.061 00:16:15.061 00:16:15.061 Commands Supported and Effects 00:16:15.061 ============================== 00:16:15.061 Admin Commands 00:16:15.061 -------------- 00:16:15.061 Get Log Page (02h): Supported 00:16:15.061 Identify (06h): Supported 00:16:15.061 Abort (08h): Supported 00:16:15.061 Set Features (09h): Supported 00:16:15.061 Get Features (0Ah): Supported 00:16:15.061 Asynchronous Event Request (0Ch): Supported 00:16:15.061 Keep Alive (18h): Supported 00:16:15.061 I/O Commands 00:16:15.061 ------------ 00:16:15.062 Flush (00h): Supported LBA-Change 00:16:15.062 Write (01h): Supported LBA-Change 00:16:15.062 Read (02h): Supported 00:16:15.062 Compare (05h): Supported 00:16:15.062 Write Zeroes (08h): Supported LBA-Change 00:16:15.062 Dataset Management (09h): Supported LBA-Change 00:16:15.062 Copy (19h): Supported LBA-Change 00:16:15.062 
00:16:15.062 Error Log 00:16:15.062 ========= 00:16:15.062 00:16:15.062 Arbitration 00:16:15.062 =========== 00:16:15.062 Arbitration Burst: 1 00:16:15.062 00:16:15.062 Power Management 00:16:15.062 ================ 00:16:15.062 Number of Power States: 1 00:16:15.062 Current Power State: Power State #0 00:16:15.062 Power State #0: 00:16:15.062 Max Power: 0.00 W 00:16:15.062 Non-Operational State: Operational 00:16:15.062 Entry Latency: Not Reported 00:16:15.062 Exit Latency: Not Reported 00:16:15.062 Relative Read Throughput: 0 00:16:15.062 Relative Read Latency: 0 00:16:15.062 Relative Write Throughput: 0 00:16:15.062 Relative Write Latency: 0 00:16:15.062 Idle Power: Not Reported 00:16:15.062 Active Power: Not Reported 00:16:15.062 Non-Operational Permissive Mode: Not Supported 00:16:15.062 00:16:15.062 Health Information 00:16:15.062 ================== 00:16:15.062 Critical Warnings: 00:16:15.062 Available Spare Space: OK 00:16:15.062 Temperature: OK 00:16:15.062 Device Reliability: OK 00:16:15.062 Read Only: No 00:16:15.062 Volatile Memory Backup: OK 00:16:15.062 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:15.062 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:15.062 Available Spare: 0% 00:16:15.062 Available Sp[2024-12-05 21:09:16.250441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:15.062 [2024-12-05 21:09:16.250449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:15.062 [2024-12-05 21:09:16.250479] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:15.062 [2024-12-05 21:09:16.250489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.062 [2024-12-05 21:09:16.250495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.062 [2024-12-05 21:09:16.250505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.062 [2024-12-05 21:09:16.250512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.062 [2024-12-05 21:09:16.251435] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:15.062 [2024-12-05 21:09:16.251446] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:15.062 [2024-12-05 21:09:16.252431] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:15.062 [2024-12-05 21:09:16.252474] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:15.062 [2024-12-05 21:09:16.252480] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:15.062 [2024-12-05 21:09:16.253444] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:15.062 [2024-12-05 21:09:16.253456] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:15.062 [2024-12-05 21:09:16.253516] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:15.062 [2024-12-05 21:09:16.257869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:15.062 are Threshold: 0% 00:16:15.062 Life Percentage Used: 0% 
00:16:15.062 Data Units Read: 0 00:16:15.062 Data Units Written: 0 00:16:15.062 Host Read Commands: 0 00:16:15.062 Host Write Commands: 0 00:16:15.062 Controller Busy Time: 0 minutes 00:16:15.062 Power Cycles: 0 00:16:15.062 Power On Hours: 0 hours 00:16:15.062 Unsafe Shutdowns: 0 00:16:15.062 Unrecoverable Media Errors: 0 00:16:15.062 Lifetime Error Log Entries: 0 00:16:15.062 Warning Temperature Time: 0 minutes 00:16:15.062 Critical Temperature Time: 0 minutes 00:16:15.062 00:16:15.062 Number of Queues 00:16:15.062 ================ 00:16:15.062 Number of I/O Submission Queues: 127 00:16:15.062 Number of I/O Completion Queues: 127 00:16:15.062 00:16:15.062 Active Namespaces 00:16:15.062 ================= 00:16:15.062 Namespace ID:1 00:16:15.062 Error Recovery Timeout: Unlimited 00:16:15.062 Command Set Identifier: NVM (00h) 00:16:15.062 Deallocate: Supported 00:16:15.062 Deallocated/Unwritten Error: Not Supported 00:16:15.062 Deallocated Read Value: Unknown 00:16:15.062 Deallocate in Write Zeroes: Not Supported 00:16:15.062 Deallocated Guard Field: 0xFFFF 00:16:15.062 Flush: Supported 00:16:15.062 Reservation: Supported 00:16:15.062 Namespace Sharing Capabilities: Multiple Controllers 00:16:15.062 Size (in LBAs): 131072 (0GiB) 00:16:15.062 Capacity (in LBAs): 131072 (0GiB) 00:16:15.062 Utilization (in LBAs): 131072 (0GiB) 00:16:15.062 NGUID: 9B10EBD7FD2544439783A4208F52E9FA 00:16:15.062 UUID: 9b10ebd7-fd25-4443-9783-a4208f52e9fa 00:16:15.062 Thin Provisioning: Not Supported 00:16:15.062 Per-NS Atomic Units: Yes 00:16:15.062 Atomic Boundary Size (Normal): 0 00:16:15.062 Atomic Boundary Size (PFail): 0 00:16:15.062 Atomic Boundary Offset: 0 00:16:15.062 Maximum Single Source Range Length: 65535 00:16:15.062 Maximum Copy Length: 65535 00:16:15.062 Maximum Source Range Count: 1 00:16:15.062 NGUID/EUI64 Never Reused: No 00:16:15.062 Namespace Write Protected: No 00:16:15.062 Number of LBA Formats: 1 00:16:15.062 Current LBA Format: LBA Format #00 00:16:15.062 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:16:15.062 00:16:15.062 21:09:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:15.062 [2024-12-05 21:09:16.455543] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:20.342 Initializing NVMe Controllers 00:16:20.342 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:20.342 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:20.342 Initialization complete. Launching workers. 00:16:20.342 ======================================================== 00:16:20.342 Latency(us) 00:16:20.343 Device Information : IOPS MiB/s Average min max 00:16:20.343 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39990.80 156.21 3200.98 871.64 9755.17 00:16:20.343 ======================================================== 00:16:20.343 Total : 39990.80 156.21 3200.98 871.64 9755.17 00:16:20.343 00:16:20.343 [2024-12-05 21:09:21.472926] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:20.343 21:09:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:20.343 [2024-12-05 21:09:21.667835] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.621 Initializing NVMe Controllers 00:16:25.622 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:25.622 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:25.622 Initialization complete. Launching workers. 00:16:25.622 ======================================================== 00:16:25.622 Latency(us) 00:16:25.622 Device Information : IOPS MiB/s Average min max 00:16:25.622 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.85 62.72 7977.73 6231.25 8951.61 00:16:25.622 ======================================================== 00:16:25.622 Total : 16055.85 62.72 7977.73 6231.25 8951.61 00:16:25.622 00:16:25.622 [2024-12-05 21:09:26.708635] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:25.622 21:09:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:25.622 [2024-12-05 21:09:26.919548] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:30.913 [2024-12-05 21:09:31.988045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:30.913 Initializing NVMe Controllers 00:16:30.913 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:30.913 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:30.913 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:30.913 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:30.913 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:30.913 Initialization complete. 
Launching workers. 00:16:30.913 Starting thread on core 2 00:16:30.913 Starting thread on core 3 00:16:30.913 Starting thread on core 1 00:16:30.913 21:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:30.913 [2024-12-05 21:09:32.283070] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.209 [2024-12-05 21:09:35.405131] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.209 Initializing NVMe Controllers 00:16:34.209 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.209 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:34.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:34.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:34.209 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:34.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:34.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:34.209 Initialization complete. Launching workers. 
00:16:34.209 Starting thread on core 1 with urgent priority queue 00:16:34.209 Starting thread on core 2 with urgent priority queue 00:16:34.209 Starting thread on core 3 with urgent priority queue 00:16:34.209 Starting thread on core 0 with urgent priority queue 00:16:34.209 SPDK bdev Controller (SPDK1 ) core 0: 9155.00 IO/s 10.92 secs/100000 ios 00:16:34.209 SPDK bdev Controller (SPDK1 ) core 1: 9144.67 IO/s 10.94 secs/100000 ios 00:16:34.209 SPDK bdev Controller (SPDK1 ) core 2: 6493.00 IO/s 15.40 secs/100000 ios 00:16:34.209 SPDK bdev Controller (SPDK1 ) core 3: 9703.33 IO/s 10.31 secs/100000 ios 00:16:34.209 ======================================================== 00:16:34.209 00:16:34.210 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:34.469 [2024-12-05 21:09:35.700301] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.469 Initializing NVMe Controllers 00:16:34.469 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.469 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.469 Namespace ID: 1 size: 0GB 00:16:34.469 Initialization complete. 00:16:34.469 INFO: using host memory buffer for IO 00:16:34.469 Hello world! 
00:16:34.469 [2024-12-05 21:09:35.737551] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.469 21:09:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:34.729 [2024-12-05 21:09:36.035278] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:35.669 Initializing NVMe Controllers 00:16:35.669 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.669 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:35.669 Initialization complete. Launching workers. 00:16:35.669 submit (in ns) avg, min, max = 8412.0, 3946.7, 4995610.8 00:16:35.669 complete (in ns) avg, min, max = 19196.6, 2365.0, 5993401.7 00:16:35.669 00:16:35.669 Submit histogram 00:16:35.669 ================ 00:16:35.669 Range in us Cumulative Count 00:16:35.669 3.947 - 3.973: 1.4742% ( 277) 00:16:35.669 3.973 - 4.000: 5.9127% ( 834) 00:16:35.669 4.000 - 4.027: 15.2581% ( 1756) 00:16:35.669 4.027 - 4.053: 26.7004% ( 2150) 00:16:35.669 4.053 - 4.080: 39.8882% ( 2478) 00:16:35.669 4.080 - 4.107: 54.5077% ( 2747) 00:16:35.669 4.107 - 4.133: 70.7398% ( 3050) 00:16:35.669 4.133 - 4.160: 83.8638% ( 2466) 00:16:35.669 4.160 - 4.187: 92.2033% ( 1567) 00:16:35.669 4.187 - 4.213: 96.5886% ( 824) 00:16:35.669 4.213 - 4.240: 98.3928% ( 339) 00:16:35.669 4.240 - 4.267: 99.1325% ( 139) 00:16:35.669 4.267 - 4.293: 99.3933% ( 49) 00:16:35.669 4.293 - 4.320: 99.4465% ( 10) 00:16:35.669 4.320 - 4.347: 99.4678% ( 4) 00:16:35.669 4.453 - 4.480: 99.4731% ( 1) 00:16:35.669 4.507 - 4.533: 99.4784% ( 1) 00:16:35.669 4.667 - 4.693: 99.4838% ( 1) 00:16:35.669 4.800 - 4.827: 99.4891% ( 1) 00:16:35.669 4.987 - 5.013: 99.4944% ( 1) 00:16:35.669 5.173 - 5.200: 99.4997% ( 1) 
00:16:35.669 5.253 - 5.280: 99.5051% ( 1) 00:16:35.669 5.413 - 5.440: 99.5104% ( 1) 00:16:35.669 5.520 - 5.547: 99.5157% ( 1) 00:16:35.669 5.547 - 5.573: 99.5210% ( 1) 00:16:35.669 5.893 - 5.920: 99.5317% ( 2) 00:16:35.669 5.973 - 6.000: 99.5423% ( 2) 00:16:35.669 6.053 - 6.080: 99.5530% ( 2) 00:16:35.669 6.080 - 6.107: 99.5742% ( 4) 00:16:35.669 6.107 - 6.133: 99.5796% ( 1) 00:16:35.669 6.133 - 6.160: 99.5849% ( 1) 00:16:35.669 6.160 - 6.187: 99.6009% ( 3) 00:16:35.669 6.187 - 6.213: 99.6062% ( 1) 00:16:35.669 6.213 - 6.240: 99.6115% ( 1) 00:16:35.669 6.240 - 6.267: 99.6168% ( 1) 00:16:35.669 6.293 - 6.320: 99.6221% ( 1) 00:16:35.669 6.320 - 6.347: 99.6328% ( 2) 00:16:35.669 6.373 - 6.400: 99.6434% ( 2) 00:16:35.669 6.427 - 6.453: 99.6541% ( 2) 00:16:35.669 6.453 - 6.480: 99.6594% ( 1) 00:16:35.669 6.480 - 6.507: 99.6647% ( 1) 00:16:35.669 6.507 - 6.533: 99.6700% ( 1) 00:16:35.669 6.533 - 6.560: 99.6754% ( 1) 00:16:35.669 6.560 - 6.587: 99.6807% ( 1) 00:16:35.669 6.640 - 6.667: 99.6913% ( 2) 00:16:35.669 6.667 - 6.693: 99.7020% ( 2) 00:16:35.669 6.720 - 6.747: 99.7073% ( 1) 00:16:35.669 6.747 - 6.773: 99.7126% ( 1) 00:16:35.669 6.773 - 6.800: 99.7179% ( 1) 00:16:35.669 6.933 - 6.987: 99.7339% ( 3) 00:16:35.669 6.987 - 7.040: 99.7392% ( 1) 00:16:35.669 7.040 - 7.093: 99.7552% ( 3) 00:16:35.669 7.093 - 7.147: 99.7605% ( 1) 00:16:35.669 7.147 - 7.200: 99.7658% ( 1) 00:16:35.669 7.200 - 7.253: 99.7712% ( 1) 00:16:35.669 7.307 - 7.360: 99.7818% ( 2) 00:16:35.669 7.360 - 7.413: 99.7978% ( 3) 00:16:35.669 7.467 - 7.520: 99.8031% ( 1) 00:16:35.669 7.520 - 7.573: 99.8084% ( 1) 00:16:35.669 7.573 - 7.627: 99.8137% ( 1) 00:16:35.669 7.627 - 7.680: 99.8244% ( 2) 00:16:35.669 7.680 - 7.733: 99.8350% ( 2) 00:16:35.669 7.733 - 7.787: 99.8403% ( 1) 00:16:35.669 7.787 - 7.840: 99.8510% ( 2) 00:16:35.669 7.840 - 7.893: 99.8563% ( 1) 00:16:35.669 7.947 - 8.000: 99.8616% ( 1) 00:16:35.669 8.000 - 8.053: 99.8670% ( 1) 00:16:35.669 8.053 - 8.107: 99.8776% ( 2) 00:16:35.669 8.107 - 
8.160: 99.8829% ( 1) 00:16:35.669 8.267 - 8.320: 99.8882% ( 1) 00:16:35.669 12.747 - 12.800: 99.8936% ( 1) 00:16:35.669 3986.773 - 4014.080: 99.9947% ( 19) 00:16:35.669 4969.813 - 4997.120: 100.0000% ( 1) 00:16:35.669 00:16:35.669 Complete histogram 00:16:35.669 ================== 00:16:35.669 Range in us Cumulative Count 00:16:35.669 2.360 - 2.373: 0.0106% ( 2) 00:16:35.669 2.373 - 2.387: 0.0639% ( 10) 00:16:35.669 2.387 - 2.400: 1.7882% ( 324) 00:16:35.669 2.400 - 2.413: 1.8787% ( 17) 00:16:35.669 [2024-12-05 21:09:37.058739] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:35.930 2.413 - 2.427: 2.3736% ( 93) 00:16:35.930 2.427 - 2.440: 2.4854% ( 21) 00:16:35.930 2.440 - 2.453: 37.4295% ( 6566) 00:16:35.930 2.453 - 2.467: 51.1868% ( 2585) 00:16:35.930 2.467 - 2.480: 67.4135% ( 3049) 00:16:35.930 2.480 - 2.493: 76.6791% ( 1741) 00:16:35.930 2.493 - 2.507: 80.5641% ( 730) 00:16:35.930 2.507 - 2.520: 83.4965% ( 551) 00:16:35.930 2.520 - 2.533: 89.2017% ( 1072) 00:16:35.930 2.533 - 2.547: 93.0814% ( 729) 00:16:35.930 2.547 - 2.560: 96.3332% ( 611) 00:16:35.930 2.560 - 2.573: 98.4300% ( 394) 00:16:35.930 2.573 - 2.587: 99.0953% ( 125) 00:16:35.930 2.587 - 2.600: 99.2815% ( 35) 00:16:35.930 2.600 - 2.613: 99.3560% ( 14) 00:16:35.930 2.613 - 2.627: 99.3827% ( 5) 00:16:35.930 4.133 - 4.160: 99.3880% ( 1) 00:16:35.930 4.293 - 4.320: 99.3933% ( 1) 00:16:35.930 4.347 - 4.373: 99.4039% ( 2) 00:16:35.930 4.373 - 4.400: 99.4146% ( 2) 00:16:35.930 4.400 - 4.427: 99.4199% ( 1) 00:16:35.930 4.453 - 4.480: 99.4252% ( 1) 00:16:35.930 4.587 - 4.613: 99.4305% ( 1) 00:16:35.930 4.640 - 4.667: 99.4359% ( 1) 00:16:35.930 4.667 - 4.693: 99.4412% ( 1) 00:16:35.930 4.693 - 4.720: 99.4465% ( 1) 00:16:35.930 4.747 - 4.773: 99.4518% ( 1) 00:16:35.930 4.800 - 4.827: 99.4572% ( 1) 00:16:35.930 4.907 - 4.933: 99.4625% ( 1) 00:16:35.930 4.933 - 4.960: 99.4731% ( 2) 00:16:35.930 4.960 - 4.987: 99.4891% ( 3) 00:16:35.930 5.013 - 5.040: 
99.4944% ( 1) 00:16:35.930 5.040 - 5.067: 99.5051% ( 2) 00:16:35.930 5.200 - 5.227: 99.5104% ( 1) 00:16:35.930 5.227 - 5.253: 99.5157% ( 1) 00:16:35.930 5.253 - 5.280: 99.5210% ( 1) 00:16:35.930 5.333 - 5.360: 99.5263% ( 1) 00:16:35.930 5.360 - 5.387: 99.5317% ( 1) 00:16:35.930 5.520 - 5.547: 99.5370% ( 1) 00:16:35.930 6.000 - 6.027: 99.5476% ( 2) 00:16:35.931 6.133 - 6.160: 99.5530% ( 1) 00:16:35.931 6.187 - 6.213: 99.5583% ( 1) 00:16:35.931 8.267 - 8.320: 99.5636% ( 1) 00:16:35.931 9.333 - 9.387: 99.5689% ( 1) 00:16:35.931 10.347 - 10.400: 99.5742% ( 1) 00:16:35.931 11.733 - 11.787: 99.5796% ( 1) 00:16:35.931 2048.000 - 2061.653: 99.5849% ( 1) 00:16:35.931 2402.987 - 2416.640: 99.5902% ( 1) 00:16:35.931 3031.040 - 3044.693: 99.5955% ( 1) 00:16:35.931 3986.773 - 4014.080: 99.9894% ( 74) 00:16:35.931 4969.813 - 4997.120: 99.9947% ( 1) 00:16:35.931 5980.160 - 6007.467: 100.0000% ( 1) 00:16:35.931 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:35.931 [ 00:16:35.931 { 00:16:35.931 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:35.931 "subtype": "Discovery", 00:16:35.931 "listen_addresses": [], 00:16:35.931 "allow_any_host": true, 00:16:35.931 "hosts": [] 00:16:35.931 }, 00:16:35.931 { 00:16:35.931 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:16:35.931 "subtype": "NVMe", 00:16:35.931 "listen_addresses": [ 00:16:35.931 { 00:16:35.931 "trtype": "VFIOUSER", 00:16:35.931 "adrfam": "IPv4", 00:16:35.931 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:35.931 "trsvcid": "0" 00:16:35.931 } 00:16:35.931 ], 00:16:35.931 "allow_any_host": true, 00:16:35.931 "hosts": [], 00:16:35.931 "serial_number": "SPDK1", 00:16:35.931 "model_number": "SPDK bdev Controller", 00:16:35.931 "max_namespaces": 32, 00:16:35.931 "min_cntlid": 1, 00:16:35.931 "max_cntlid": 65519, 00:16:35.931 "namespaces": [ 00:16:35.931 { 00:16:35.931 "nsid": 1, 00:16:35.931 "bdev_name": "Malloc1", 00:16:35.931 "name": "Malloc1", 00:16:35.931 "nguid": "9B10EBD7FD2544439783A4208F52E9FA", 00:16:35.931 "uuid": "9b10ebd7-fd25-4443-9783-a4208f52e9fa" 00:16:35.931 } 00:16:35.931 ] 00:16:35.931 }, 00:16:35.931 { 00:16:35.931 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:35.931 "subtype": "NVMe", 00:16:35.931 "listen_addresses": [ 00:16:35.931 { 00:16:35.931 "trtype": "VFIOUSER", 00:16:35.931 "adrfam": "IPv4", 00:16:35.931 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:35.931 "trsvcid": "0" 00:16:35.931 } 00:16:35.931 ], 00:16:35.931 "allow_any_host": true, 00:16:35.931 "hosts": [], 00:16:35.931 "serial_number": "SPDK2", 00:16:35.931 "model_number": "SPDK bdev Controller", 00:16:35.931 "max_namespaces": 32, 00:16:35.931 "min_cntlid": 1, 00:16:35.931 "max_cntlid": 65519, 00:16:35.931 "namespaces": [ 00:16:35.931 { 00:16:35.931 "nsid": 1, 00:16:35.931 "bdev_name": "Malloc2", 00:16:35.931 "name": "Malloc2", 00:16:35.931 "nguid": "9D2579ACBFC441CFA6EF2F5B8C3E43B6", 00:16:35.931 "uuid": "9d2579ac-bfc4-41cf-a6ef-2f5b8c3e43b6" 00:16:35.931 } 00:16:35.931 ] 00:16:35.931 } 00:16:35.931 ] 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2052407 00:16:35.931 
21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:35.931 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:36.192 Malloc3 00:16:36.192 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:36.192 [2024-12-05 21:09:37.508795] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:36.453 [2024-12-05 21:09:37.653790] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:36.453 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:36.453 Asynchronous 
Event Request test 00:16:36.453 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:36.453 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:36.453 Registering asynchronous event callbacks... 00:16:36.453 Starting namespace attribute notice tests for all controllers... 00:16:36.453 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:36.453 aer_cb - Changed Namespace 00:16:36.453 Cleaning up... 00:16:36.453 [ 00:16:36.453 { 00:16:36.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:36.453 "subtype": "Discovery", 00:16:36.453 "listen_addresses": [], 00:16:36.453 "allow_any_host": true, 00:16:36.453 "hosts": [] 00:16:36.453 }, 00:16:36.453 { 00:16:36.453 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:36.453 "subtype": "NVMe", 00:16:36.453 "listen_addresses": [ 00:16:36.453 { 00:16:36.453 "trtype": "VFIOUSER", 00:16:36.453 "adrfam": "IPv4", 00:16:36.453 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:36.453 "trsvcid": "0" 00:16:36.453 } 00:16:36.453 ], 00:16:36.453 "allow_any_host": true, 00:16:36.453 "hosts": [], 00:16:36.453 "serial_number": "SPDK1", 00:16:36.453 "model_number": "SPDK bdev Controller", 00:16:36.453 "max_namespaces": 32, 00:16:36.453 "min_cntlid": 1, 00:16:36.453 "max_cntlid": 65519, 00:16:36.453 "namespaces": [ 00:16:36.453 { 00:16:36.453 "nsid": 1, 00:16:36.453 "bdev_name": "Malloc1", 00:16:36.453 "name": "Malloc1", 00:16:36.453 "nguid": "9B10EBD7FD2544439783A4208F52E9FA", 00:16:36.453 "uuid": "9b10ebd7-fd25-4443-9783-a4208f52e9fa" 00:16:36.453 }, 00:16:36.453 { 00:16:36.453 "nsid": 2, 00:16:36.453 "bdev_name": "Malloc3", 00:16:36.453 "name": "Malloc3", 00:16:36.453 "nguid": "3C8FC2FEF09141408FAC633DC3D58196", 00:16:36.453 "uuid": "3c8fc2fe-f091-4140-8fac-633dc3d58196" 00:16:36.453 } 00:16:36.453 ] 00:16:36.453 }, 00:16:36.453 { 00:16:36.453 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:36.453 "subtype": "NVMe", 00:16:36.453 "listen_addresses": [ 00:16:36.453 { 
00:16:36.453 "trtype": "VFIOUSER", 00:16:36.453 "adrfam": "IPv4", 00:16:36.453 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:36.453 "trsvcid": "0" 00:16:36.453 } 00:16:36.453 ], 00:16:36.453 "allow_any_host": true, 00:16:36.453 "hosts": [], 00:16:36.453 "serial_number": "SPDK2", 00:16:36.453 "model_number": "SPDK bdev Controller", 00:16:36.453 "max_namespaces": 32, 00:16:36.453 "min_cntlid": 1, 00:16:36.453 "max_cntlid": 65519, 00:16:36.453 "namespaces": [ 00:16:36.453 { 00:16:36.453 "nsid": 1, 00:16:36.453 "bdev_name": "Malloc2", 00:16:36.453 "name": "Malloc2", 00:16:36.453 "nguid": "9D2579ACBFC441CFA6EF2F5B8C3E43B6", 00:16:36.453 "uuid": "9d2579ac-bfc4-41cf-a6ef-2f5b8c3e43b6" 00:16:36.453 } 00:16:36.453 ] 00:16:36.453 } 00:16:36.453 ] 00:16:36.453 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2052407 00:16:36.453 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.453 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:36.453 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:36.453 21:09:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:36.716 [2024-12-05 21:09:37.890287] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:16:36.716 [2024-12-05 21:09:37.890329] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052599 ] 00:16:36.716 [2024-12-05 21:09:37.942691] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:36.716 [2024-12-05 21:09:37.955111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:36.716 [2024-12-05 21:09:37.955136] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2326c0d000 00:16:36.716 [2024-12-05 21:09:37.956121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.957122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.958126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.959127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.960133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.961137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.962147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:36.716 
[2024-12-05 21:09:37.963150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:36.716 [2024-12-05 21:09:37.964152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:36.716 [2024-12-05 21:09:37.964163] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2326c02000 00:16:36.716 [2024-12-05 21:09:37.965487] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:36.716 [2024-12-05 21:09:37.981698] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:36.716 [2024-12-05 21:09:37.981723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:36.716 [2024-12-05 21:09:37.983779] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:36.716 [2024-12-05 21:09:37.983822] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:36.716 [2024-12-05 21:09:37.983908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:36.716 [2024-12-05 21:09:37.983920] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:36.716 [2024-12-05 21:09:37.983926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:36.716 [2024-12-05 21:09:37.985869] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:36.716 [2024-12-05 21:09:37.985883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:36.716 [2024-12-05 21:09:37.985891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:36.716 [2024-12-05 21:09:37.986798] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:36.716 [2024-12-05 21:09:37.986808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:36.716 [2024-12-05 21:09:37.986816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:36.716 [2024-12-05 21:09:37.987802] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:36.716 [2024-12-05 21:09:37.987811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:36.716 [2024-12-05 21:09:37.988818] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:36.716 [2024-12-05 21:09:37.988827] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:36.716 [2024-12-05 21:09:37.988832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:36.716 [2024-12-05 21:09:37.988839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:36.716 [2024-12-05 21:09:37.988948] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:36.716 [2024-12-05 21:09:37.988953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:36.716 [2024-12-05 21:09:37.988958] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:36.716 [2024-12-05 21:09:37.989827] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:36.716 [2024-12-05 21:09:37.990832] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:36.717 [2024-12-05 21:09:37.991840] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:36.717 [2024-12-05 21:09:37.992844] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:36.717 [2024-12-05 21:09:37.992889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:36.717 [2024-12-05 21:09:37.993856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:36.717 [2024-12-05 21:09:37.993870] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:36.717 [2024-12-05 21:09:37.993875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:37.993897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:36.717 [2024-12-05 21:09:37.993904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:37.993920] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:36.717 [2024-12-05 21:09:37.993925] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.717 [2024-12-05 21:09:37.993929] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.717 [2024-12-05 21:09:37.993941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:37.999869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:37.999882] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:36.717 [2024-12-05 21:09:37.999889] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:36.717 [2024-12-05 21:09:37.999894] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:36.717 [2024-12-05 21:09:37.999899] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:36.717 [2024-12-05 21:09:37.999904] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:36.717 [2024-12-05 21:09:37.999908] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:36.717 [2024-12-05 21:09:37.999913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:37.999921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:37.999931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.007868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.007881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.717 [2024-12-05 21:09:38.007890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.717 [2024-12-05 21:09:38.007899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.717 [2024-12-05 21:09:38.007907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:36.717 [2024-12-05 21:09:38.007912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.007921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.007930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.015869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.015878] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:36.717 [2024-12-05 21:09:38.015883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.015890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.015898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.015907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.023869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.023936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.023944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:36.717 
[2024-12-05 21:09:38.023952] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:36.717 [2024-12-05 21:09:38.023956] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:36.717 [2024-12-05 21:09:38.023960] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.717 [2024-12-05 21:09:38.023966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.031873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.031889] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:36.717 [2024-12-05 21:09:38.031901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.031909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.031916] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:36.717 [2024-12-05 21:09:38.031921] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.717 [2024-12-05 21:09:38.031924] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.717 [2024-12-05 21:09:38.031930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.039871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.039885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.039894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.039901] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:36.717 [2024-12-05 21:09:38.039906] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.717 [2024-12-05 21:09:38.039909] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.717 [2024-12-05 21:09:38.039915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.047868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.047879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047920] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:36.717 [2024-12-05 21:09:38.047925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:36.717 [2024-12-05 21:09:38.047931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:36.717 [2024-12-05 21:09:38.047947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.055870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.055884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.063868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.063882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.071869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 
21:09:38.071884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:36.717 [2024-12-05 21:09:38.079867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:36.717 [2024-12-05 21:09:38.079884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:36.717 [2024-12-05 21:09:38.079889] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:36.717 [2024-12-05 21:09:38.079893] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:36.717 [2024-12-05 21:09:38.079896] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:36.717 [2024-12-05 21:09:38.079900] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:36.718 [2024-12-05 21:09:38.079906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:36.718 [2024-12-05 21:09:38.079914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:36.718 [2024-12-05 21:09:38.079919] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:36.718 [2024-12-05 21:09:38.079922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.718 [2024-12-05 21:09:38.079928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:36.718 [2024-12-05 21:09:38.079937] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:36.718 [2024-12-05 21:09:38.079942] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:36.718 [2024-12-05 21:09:38.079946] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.718 [2024-12-05 21:09:38.079952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:36.718 [2024-12-05 21:09:38.079959] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:36.718 [2024-12-05 21:09:38.079964] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:36.718 [2024-12-05 21:09:38.079967] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:36.718 [2024-12-05 21:09:38.079973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:36.718 [2024-12-05 21:09:38.087869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:36.718 [2024-12-05 21:09:38.087885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:36.718 [2024-12-05 21:09:38.087895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:36.718 [2024-12-05 21:09:38.087903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:36.718 ===================================================== 00:16:36.718 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:36.718 ===================================================== 00:16:36.718 Controller Capabilities/Features 00:16:36.718 
================================ 00:16:36.718 Vendor ID: 4e58 00:16:36.718 Subsystem Vendor ID: 4e58 00:16:36.718 Serial Number: SPDK2 00:16:36.718 Model Number: SPDK bdev Controller 00:16:36.718 Firmware Version: 25.01 00:16:36.718 Recommended Arb Burst: 6 00:16:36.718 IEEE OUI Identifier: 8d 6b 50 00:16:36.718 Multi-path I/O 00:16:36.718 May have multiple subsystem ports: Yes 00:16:36.718 May have multiple controllers: Yes 00:16:36.718 Associated with SR-IOV VF: No 00:16:36.718 Max Data Transfer Size: 131072 00:16:36.718 Max Number of Namespaces: 32 00:16:36.718 Max Number of I/O Queues: 127 00:16:36.718 NVMe Specification Version (VS): 1.3 00:16:36.718 NVMe Specification Version (Identify): 1.3 00:16:36.718 Maximum Queue Entries: 256 00:16:36.718 Contiguous Queues Required: Yes 00:16:36.718 Arbitration Mechanisms Supported 00:16:36.718 Weighted Round Robin: Not Supported 00:16:36.718 Vendor Specific: Not Supported 00:16:36.718 Reset Timeout: 15000 ms 00:16:36.718 Doorbell Stride: 4 bytes 00:16:36.718 NVM Subsystem Reset: Not Supported 00:16:36.718 Command Sets Supported 00:16:36.718 NVM Command Set: Supported 00:16:36.718 Boot Partition: Not Supported 00:16:36.718 Memory Page Size Minimum: 4096 bytes 00:16:36.718 Memory Page Size Maximum: 4096 bytes 00:16:36.718 Persistent Memory Region: Not Supported 00:16:36.718 Optional Asynchronous Events Supported 00:16:36.718 Namespace Attribute Notices: Supported 00:16:36.718 Firmware Activation Notices: Not Supported 00:16:36.718 ANA Change Notices: Not Supported 00:16:36.718 PLE Aggregate Log Change Notices: Not Supported 00:16:36.718 LBA Status Info Alert Notices: Not Supported 00:16:36.718 EGE Aggregate Log Change Notices: Not Supported 00:16:36.718 Normal NVM Subsystem Shutdown event: Not Supported 00:16:36.718 Zone Descriptor Change Notices: Not Supported 00:16:36.718 Discovery Log Change Notices: Not Supported 00:16:36.718 Controller Attributes 00:16:36.718 128-bit Host Identifier: Supported 00:16:36.718 
Non-Operational Permissive Mode: Not Supported 00:16:36.718 NVM Sets: Not Supported 00:16:36.718 Read Recovery Levels: Not Supported 00:16:36.718 Endurance Groups: Not Supported 00:16:36.718 Predictable Latency Mode: Not Supported 00:16:36.718 Traffic Based Keep ALive: Not Supported 00:16:36.718 Namespace Granularity: Not Supported 00:16:36.718 SQ Associations: Not Supported 00:16:36.718 UUID List: Not Supported 00:16:36.718 Multi-Domain Subsystem: Not Supported 00:16:36.718 Fixed Capacity Management: Not Supported 00:16:36.718 Variable Capacity Management: Not Supported 00:16:36.718 Delete Endurance Group: Not Supported 00:16:36.718 Delete NVM Set: Not Supported 00:16:36.718 Extended LBA Formats Supported: Not Supported 00:16:36.718 Flexible Data Placement Supported: Not Supported 00:16:36.718 00:16:36.718 Controller Memory Buffer Support 00:16:36.718 ================================ 00:16:36.718 Supported: No 00:16:36.718 00:16:36.718 Persistent Memory Region Support 00:16:36.718 ================================ 00:16:36.718 Supported: No 00:16:36.718 00:16:36.718 Admin Command Set Attributes 00:16:36.718 ============================ 00:16:36.718 Security Send/Receive: Not Supported 00:16:36.718 Format NVM: Not Supported 00:16:36.718 Firmware Activate/Download: Not Supported 00:16:36.718 Namespace Management: Not Supported 00:16:36.718 Device Self-Test: Not Supported 00:16:36.718 Directives: Not Supported 00:16:36.718 NVMe-MI: Not Supported 00:16:36.718 Virtualization Management: Not Supported 00:16:36.718 Doorbell Buffer Config: Not Supported 00:16:36.718 Get LBA Status Capability: Not Supported 00:16:36.718 Command & Feature Lockdown Capability: Not Supported 00:16:36.718 Abort Command Limit: 4 00:16:36.718 Async Event Request Limit: 4 00:16:36.718 Number of Firmware Slots: N/A 00:16:36.718 Firmware Slot 1 Read-Only: N/A 00:16:36.718 Firmware Activation Without Reset: N/A 00:16:36.718 Multiple Update Detection Support: N/A 00:16:36.718 Firmware Update 
Granularity: No Information Provided 00:16:36.718 Per-Namespace SMART Log: No 00:16:36.718 Asymmetric Namespace Access Log Page: Not Supported 00:16:36.718 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:36.718 Command Effects Log Page: Supported 00:16:36.718 Get Log Page Extended Data: Supported 00:16:36.718 Telemetry Log Pages: Not Supported 00:16:36.718 Persistent Event Log Pages: Not Supported 00:16:36.718 Supported Log Pages Log Page: May Support 00:16:36.718 Commands Supported & Effects Log Page: Not Supported 00:16:36.718 Feature Identifiers & Effects Log Page:May Support 00:16:36.718 NVMe-MI Commands & Effects Log Page: May Support 00:16:36.718 Data Area 4 for Telemetry Log: Not Supported 00:16:36.718 Error Log Page Entries Supported: 128 00:16:36.718 Keep Alive: Supported 00:16:36.718 Keep Alive Granularity: 10000 ms 00:16:36.718 00:16:36.718 NVM Command Set Attributes 00:16:36.718 ========================== 00:16:36.718 Submission Queue Entry Size 00:16:36.718 Max: 64 00:16:36.718 Min: 64 00:16:36.718 Completion Queue Entry Size 00:16:36.718 Max: 16 00:16:36.718 Min: 16 00:16:36.718 Number of Namespaces: 32 00:16:36.718 Compare Command: Supported 00:16:36.718 Write Uncorrectable Command: Not Supported 00:16:36.718 Dataset Management Command: Supported 00:16:36.718 Write Zeroes Command: Supported 00:16:36.718 Set Features Save Field: Not Supported 00:16:36.718 Reservations: Not Supported 00:16:36.718 Timestamp: Not Supported 00:16:36.718 Copy: Supported 00:16:36.718 Volatile Write Cache: Present 00:16:36.718 Atomic Write Unit (Normal): 1 00:16:36.718 Atomic Write Unit (PFail): 1 00:16:36.718 Atomic Compare & Write Unit: 1 00:16:36.718 Fused Compare & Write: Supported 00:16:36.718 Scatter-Gather List 00:16:36.718 SGL Command Set: Supported (Dword aligned) 00:16:36.718 SGL Keyed: Not Supported 00:16:36.718 SGL Bit Bucket Descriptor: Not Supported 00:16:36.718 SGL Metadata Pointer: Not Supported 00:16:36.718 Oversized SGL: Not Supported 00:16:36.718 SGL 
Metadata Address: Not Supported 00:16:36.718 SGL Offset: Not Supported 00:16:36.718 Transport SGL Data Block: Not Supported 00:16:36.718 Replay Protected Memory Block: Not Supported 00:16:36.718 00:16:36.718 Firmware Slot Information 00:16:36.718 ========================= 00:16:36.718 Active slot: 1 00:16:36.718 Slot 1 Firmware Revision: 25.01 00:16:36.718 00:16:36.718 00:16:36.718 Commands Supported and Effects 00:16:36.718 ============================== 00:16:36.718 Admin Commands 00:16:36.718 -------------- 00:16:36.718 Get Log Page (02h): Supported 00:16:36.718 Identify (06h): Supported 00:16:36.718 Abort (08h): Supported 00:16:36.718 Set Features (09h): Supported 00:16:36.718 Get Features (0Ah): Supported 00:16:36.718 Asynchronous Event Request (0Ch): Supported 00:16:36.718 Keep Alive (18h): Supported 00:16:36.718 I/O Commands 00:16:36.718 ------------ 00:16:36.718 Flush (00h): Supported LBA-Change 00:16:36.719 Write (01h): Supported LBA-Change 00:16:36.719 Read (02h): Supported 00:16:36.719 Compare (05h): Supported 00:16:36.719 Write Zeroes (08h): Supported LBA-Change 00:16:36.719 Dataset Management (09h): Supported LBA-Change 00:16:36.719 Copy (19h): Supported LBA-Change 00:16:36.719 00:16:36.719 Error Log 00:16:36.719 ========= 00:16:36.719 00:16:36.719 Arbitration 00:16:36.719 =========== 00:16:36.719 Arbitration Burst: 1 00:16:36.719 00:16:36.719 Power Management 00:16:36.719 ================ 00:16:36.719 Number of Power States: 1 00:16:36.719 Current Power State: Power State #0 00:16:36.719 Power State #0: 00:16:36.719 Max Power: 0.00 W 00:16:36.719 Non-Operational State: Operational 00:16:36.719 Entry Latency: Not Reported 00:16:36.719 Exit Latency: Not Reported 00:16:36.719 Relative Read Throughput: 0 00:16:36.719 Relative Read Latency: 0 00:16:36.719 Relative Write Throughput: 0 00:16:36.719 Relative Write Latency: 0 00:16:36.719 Idle Power: Not Reported 00:16:36.719 Active Power: Not Reported 00:16:36.719 Non-Operational Permissive Mode: Not 
Supported 00:16:36.719 00:16:36.719 Health Information 00:16:36.719 ================== 00:16:36.719 Critical Warnings: 00:16:36.719 Available Spare Space: OK 00:16:36.719 Temperature: OK 00:16:36.719 Device Reliability: OK 00:16:36.719 Read Only: No 00:16:36.719 Volatile Memory Backup: OK 00:16:36.719 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:36.719 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:36.719 Available Spare: 0% 00:16:36.719 Available Sp[2024-12-05 21:09:38.088001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:36.719 [2024-12-05 21:09:38.095870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:36.719 [2024-12-05 21:09:38.095903] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:36.719 [2024-12-05 21:09:38.095912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.719 [2024-12-05 21:09:38.095919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.719 [2024-12-05 21:09:38.095925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.719 [2024-12-05 21:09:38.095932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:36.719 [2024-12-05 21:09:38.095971] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:36.719 [2024-12-05 21:09:38.095981] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:36.719 
[2024-12-05 21:09:38.096973] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:36.719 [2024-12-05 21:09:38.097022] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:36.719 [2024-12-05 21:09:38.097029] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:36.719 [2024-12-05 21:09:38.097976] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:36.719 [2024-12-05 21:09:38.097989] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:36.719 [2024-12-05 21:09:38.098036] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:36.719 [2024-12-05 21:09:38.100869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:36.719 are Threshold: 0% 00:16:36.719 Life Percentage Used: 0% 00:16:36.719 Data Units Read: 0 00:16:36.719 Data Units Written: 0 00:16:36.719 Host Read Commands: 0 00:16:36.719 Host Write Commands: 0 00:16:36.719 Controller Busy Time: 0 minutes 00:16:36.719 Power Cycles: 0 00:16:36.719 Power On Hours: 0 hours 00:16:36.719 Unsafe Shutdowns: 0 00:16:36.719 Unrecoverable Media Errors: 0 00:16:36.719 Lifetime Error Log Entries: 0 00:16:36.719 Warning Temperature Time: 0 minutes 00:16:36.719 Critical Temperature Time: 0 minutes 00:16:36.719 00:16:36.719 Number of Queues 00:16:36.719 ================ 00:16:36.719 Number of I/O Submission Queues: 127 00:16:36.719 Number of I/O Completion Queues: 127 00:16:36.719 00:16:36.719 Active Namespaces 00:16:36.719 ================= 00:16:36.719 Namespace ID:1 00:16:36.719 Error Recovery Timeout: Unlimited 
00:16:36.719 Command Set Identifier: NVM (00h) 00:16:36.719 Deallocate: Supported 00:16:36.719 Deallocated/Unwritten Error: Not Supported 00:16:36.719 Deallocated Read Value: Unknown 00:16:36.719 Deallocate in Write Zeroes: Not Supported 00:16:36.719 Deallocated Guard Field: 0xFFFF 00:16:36.719 Flush: Supported 00:16:36.719 Reservation: Supported 00:16:36.719 Namespace Sharing Capabilities: Multiple Controllers 00:16:36.719 Size (in LBAs): 131072 (0GiB) 00:16:36.719 Capacity (in LBAs): 131072 (0GiB) 00:16:36.719 Utilization (in LBAs): 131072 (0GiB) 00:16:36.719 NGUID: 9D2579ACBFC441CFA6EF2F5B8C3E43B6 00:16:36.719 UUID: 9d2579ac-bfc4-41cf-a6ef-2f5b8c3e43b6 00:16:36.719 Thin Provisioning: Not Supported 00:16:36.719 Per-NS Atomic Units: Yes 00:16:36.719 Atomic Boundary Size (Normal): 0 00:16:36.719 Atomic Boundary Size (PFail): 0 00:16:36.719 Atomic Boundary Offset: 0 00:16:36.719 Maximum Single Source Range Length: 65535 00:16:36.719 Maximum Copy Length: 65535 00:16:36.719 Maximum Source Range Count: 1 00:16:36.719 NGUID/EUI64 Never Reused: No 00:16:36.719 Namespace Write Protected: No 00:16:36.719 Number of LBA Formats: 1 00:16:36.719 Current LBA Format: LBA Format #00 00:16:36.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:36.719 00:16:36.719 21:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:36.979 [2024-12-05 21:09:38.305254] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:42.258 Initializing NVMe Controllers 00:16:42.258 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:42.258 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:42.258 Initialization complete. Launching workers. 00:16:42.258 ======================================================== 00:16:42.258 Latency(us) 00:16:42.258 Device Information : IOPS MiB/s Average min max 00:16:42.258 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39988.04 156.20 3200.84 870.59 6818.15 00:16:42.258 ======================================================== 00:16:42.258 Total : 39988.04 156.20 3200.84 870.59 6818.15 00:16:42.258 00:16:42.259 [2024-12-05 21:09:43.410063] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:42.259 21:09:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:42.259 [2024-12-05 21:09:43.599633] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:47.539 Initializing NVMe Controllers 00:16:47.539 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:47.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:47.539 Initialization complete. Launching workers. 
00:16:47.539 ======================================================== 00:16:47.539 Latency(us) 00:16:47.539 Device Information : IOPS MiB/s Average min max 00:16:47.539 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34154.79 133.42 3747.41 1124.11 7346.44 00:16:47.539 ======================================================== 00:16:47.539 Total : 34154.79 133.42 3747.41 1124.11 7346.44 00:16:47.539 00:16:47.539 [2024-12-05 21:09:48.618007] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:47.539 21:09:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:47.539 [2024-12-05 21:09:48.830210] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:52.822 [2024-12-05 21:09:53.971119] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:52.822 Initializing NVMe Controllers 00:16:52.822 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.822 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:52.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:52.822 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:52.822 Initialization complete. Launching workers. 
00:16:52.822 Starting thread on core 2 00:16:52.822 Starting thread on core 3 00:16:52.822 Starting thread on core 1 00:16:52.822 21:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:53.083 [2024-12-05 21:09:54.261900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:56.383 [2024-12-05 21:09:57.348669] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:56.383 Initializing NVMe Controllers 00:16:56.383 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.383 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.383 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:56.383 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:56.383 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:56.383 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:56.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:56.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:56.383 Initialization complete. Launching workers. 
00:16:56.383 Starting thread on core 1 with urgent priority queue 00:16:56.383 Starting thread on core 2 with urgent priority queue 00:16:56.383 Starting thread on core 3 with urgent priority queue 00:16:56.383 Starting thread on core 0 with urgent priority queue 00:16:56.383 SPDK bdev Controller (SPDK2 ) core 0: 8418.33 IO/s 11.88 secs/100000 ios 00:16:56.383 SPDK bdev Controller (SPDK2 ) core 1: 8101.00 IO/s 12.34 secs/100000 ios 00:16:56.383 SPDK bdev Controller (SPDK2 ) core 2: 8044.33 IO/s 12.43 secs/100000 ios 00:16:56.383 SPDK bdev Controller (SPDK2 ) core 3: 12665.67 IO/s 7.90 secs/100000 ios 00:16:56.383 ======================================================== 00:16:56.383 00:16:56.383 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:56.383 [2024-12-05 21:09:57.651302] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:56.383 Initializing NVMe Controllers 00:16:56.383 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.383 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.383 Namespace ID: 1 size: 0GB 00:16:56.383 Initialization complete. 00:16:56.383 INFO: using host memory buffer for IO 00:16:56.383 Hello world! 
00:16:56.383 [2024-12-05 21:09:57.661363] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:56.383 21:09:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:56.643 [2024-12-05 21:09:57.958830] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:58.029 Initializing NVMe Controllers 00:16:58.029 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:58.029 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:58.029 Initialization complete. Launching workers. 00:16:58.029 submit (in ns) avg, min, max = 8308.6, 3898.3, 4000600.8 00:16:58.029 complete (in ns) avg, min, max = 17785.0, 2390.0, 3999085.0 00:16:58.029 00:16:58.029 Submit histogram 00:16:58.029 ================ 00:16:58.029 Range in us Cumulative Count 00:16:58.029 3.893 - 3.920: 0.4938% ( 93) 00:16:58.029 3.920 - 3.947: 4.1253% ( 684) 00:16:58.029 3.947 - 3.973: 11.4521% ( 1380) 00:16:58.029 3.973 - 4.000: 22.9838% ( 2172) 00:16:58.029 4.000 - 4.027: 35.1102% ( 2284) 00:16:58.029 4.027 - 4.053: 47.3215% ( 2300) 00:16:58.029 4.053 - 4.080: 62.8723% ( 2929) 00:16:58.029 4.080 - 4.107: 78.1152% ( 2871) 00:16:58.029 4.107 - 4.133: 89.2010% ( 2088) 00:16:58.029 4.133 - 4.160: 95.4553% ( 1178) 00:16:58.029 4.160 - 4.187: 97.9081% ( 462) 00:16:58.029 4.187 - 4.213: 98.9753% ( 201) 00:16:58.029 4.213 - 4.240: 99.2992% ( 61) 00:16:58.029 4.240 - 4.267: 99.4319% ( 25) 00:16:58.029 4.267 - 4.293: 99.4638% ( 6) 00:16:58.029 4.293 - 4.320: 99.4691% ( 1) 00:16:58.029 4.507 - 4.533: 99.4744% ( 1) 00:16:58.029 4.800 - 4.827: 99.4797% ( 1) 00:16:58.029 4.933 - 4.960: 99.4850% ( 1) 00:16:58.029 4.960 - 4.987: 99.4903% ( 1) 00:16:58.029 5.013 - 5.040: 99.4956% ( 1) 
00:16:58.029 5.093 - 5.120: 99.5009% ( 1) 00:16:58.029 5.120 - 5.147: 99.5062% ( 1) 00:16:58.029 5.653 - 5.680: 99.5115% ( 1) 00:16:58.029 5.680 - 5.707: 99.5169% ( 1) 00:16:58.029 5.813 - 5.840: 99.5275% ( 2) 00:16:58.029 5.867 - 5.893: 99.5381% ( 2) 00:16:58.029 6.027 - 6.053: 99.5434% ( 1) 00:16:58.029 6.107 - 6.133: 99.5593% ( 3) 00:16:58.029 6.133 - 6.160: 99.5646% ( 1) 00:16:58.029 6.187 - 6.213: 99.5699% ( 1) 00:16:58.029 6.213 - 6.240: 99.5753% ( 1) 00:16:58.029 6.240 - 6.267: 99.5806% ( 1) 00:16:58.029 6.347 - 6.373: 99.5859% ( 1) 00:16:58.029 6.373 - 6.400: 99.5965% ( 2) 00:16:58.029 6.427 - 6.453: 99.6071% ( 2) 00:16:58.029 6.453 - 6.480: 99.6124% ( 1) 00:16:58.029 6.507 - 6.533: 99.6230% ( 2) 00:16:58.029 6.533 - 6.560: 99.6284% ( 1) 00:16:58.029 6.560 - 6.587: 99.6337% ( 1) 00:16:58.029 6.587 - 6.613: 99.6390% ( 1) 00:16:58.029 6.613 - 6.640: 99.6443% ( 1) 00:16:58.029 6.640 - 6.667: 99.6496% ( 1) 00:16:58.029 6.827 - 6.880: 99.6655% ( 3) 00:16:58.029 6.880 - 6.933: 99.6814% ( 3) 00:16:58.029 6.933 - 6.987: 99.6921% ( 2) 00:16:58.029 6.987 - 7.040: 99.7027% ( 2) 00:16:58.029 7.093 - 7.147: 99.7186% ( 3) 00:16:58.029 7.200 - 7.253: 99.7292% ( 2) 00:16:58.029 7.253 - 7.307: 99.7452% ( 3) 00:16:58.029 7.307 - 7.360: 99.7611% ( 3) 00:16:58.029 7.360 - 7.413: 99.7664% ( 1) 00:16:58.029 7.413 - 7.467: 99.7770% ( 2) 00:16:58.029 7.467 - 7.520: 99.7823% ( 1) 00:16:58.029 7.520 - 7.573: 99.7929% ( 2) 00:16:58.029 7.573 - 7.627: 99.8089% ( 3) 00:16:58.029 7.627 - 7.680: 99.8142% ( 1) 00:16:58.029 7.680 - 7.733: 99.8195% ( 1) 00:16:58.029 7.733 - 7.787: 99.8248% ( 1) 00:16:58.029 7.787 - 7.840: 99.8354% ( 2) 00:16:58.029 8.000 - 8.053: 99.8460% ( 2) 00:16:58.029 8.053 - 8.107: 99.8513% ( 1) 00:16:58.029 8.267 - 8.320: 99.8620% ( 2) 00:16:58.029 8.800 - 8.853: 99.8673% ( 1) 00:16:58.029 8.853 - 8.907: 99.8726% ( 1) 00:16:58.029 9.067 - 9.120: 99.8832% ( 2) 00:16:58.029 9.120 - 9.173: 99.8885% ( 1) 00:16:58.029 14.293 - 14.400: 99.8938% ( 1) 00:16:58.029 3986.773 - 
4014.080: 100.0000% ( 20) 00:16:58.029 00:16:58.029 Complete histogram 00:16:58.029 ================== 00:16:58.029 Range in us Cumulative Count 00:16:58.029 2.387 - 2.400: 1.7361% ( 327) 00:16:58.029 2.400 - 2.413: 2.2458% ( 96) 00:16:58.029 2.413 - 2.427: 2.7608% ( 97) 00:16:58.029 2.427 - 2.440: 2.9466% ( 35) 00:16:58.029 2.440 - 2.453: 3.0847% ( 26) 00:16:58.029 2.453 - 2.467: 29.1532% ( 4910) 00:16:58.029 2.467 - 2.480: 51.3087% ( 4173) 00:16:58.029 2.480 - 2.493: 64.2527% ( 2438) 00:16:58.029 2.493 - [2024-12-05 21:09:59.055571] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:58.029 2.507: 75.0730% ( 2038) 00:16:58.029 2.507 - 2.520: 79.3417% ( 804) 00:16:58.029 2.520 - 2.533: 81.7680% ( 457) 00:16:58.029 2.533 - 2.547: 86.7905% ( 946) 00:16:58.029 2.547 - 2.560: 92.3016% ( 1038) 00:16:58.029 2.560 - 2.573: 95.4340% ( 590) 00:16:58.029 2.573 - 2.587: 97.8922% ( 463) 00:16:58.030 2.587 - 2.600: 98.9222% ( 194) 00:16:58.030 2.600 - 2.613: 99.2992% ( 71) 00:16:58.030 2.613 - 2.627: 99.3894% ( 17) 00:16:58.030 2.627 - 2.640: 99.3947% ( 1) 00:16:58.030 4.533 - 4.560: 99.4001% ( 1) 00:16:58.030 4.907 - 4.933: 99.4054% ( 1) 00:16:58.030 5.013 - 5.040: 99.4107% ( 1) 00:16:58.030 5.120 - 5.147: 99.4160% ( 1) 00:16:58.030 5.253 - 5.280: 99.4319% ( 3) 00:16:58.030 5.280 - 5.307: 99.4425% ( 2) 00:16:58.030 5.413 - 5.440: 99.4478% ( 1) 00:16:58.030 5.440 - 5.467: 99.4638% ( 3) 00:16:58.030 5.520 - 5.547: 99.4744% ( 2) 00:16:58.030 5.547 - 5.573: 99.4850% ( 2) 00:16:58.030 5.627 - 5.653: 99.4903% ( 1) 00:16:58.030 5.760 - 5.787: 99.4956% ( 1) 00:16:58.030 5.787 - 5.813: 99.5009% ( 1) 00:16:58.030 5.813 - 5.840: 99.5062% ( 1) 00:16:58.030 5.867 - 5.893: 99.5115% ( 1) 00:16:58.030 5.893 - 5.920: 99.5222% ( 2) 00:16:58.030 5.947 - 5.973: 99.5275% ( 1) 00:16:58.030 6.000 - 6.027: 99.5381% ( 2) 00:16:58.030 6.053 - 6.080: 99.5434% ( 1) 00:16:58.030 6.080 - 6.107: 99.5487% ( 1) 00:16:58.030 6.160 - 6.187: 99.5540% ( 1) 
00:16:58.030 6.213 - 6.240: 99.5593% ( 1) 00:16:58.030 6.587 - 6.613: 99.5646% ( 1) 00:16:58.030 7.040 - 7.093: 99.5699% ( 1) 00:16:58.030 7.147 - 7.200: 99.5753% ( 1) 00:16:58.030 10.613 - 10.667: 99.5806% ( 1) 00:16:58.030 11.200 - 11.253: 99.5859% ( 1) 00:16:58.030 11.307 - 11.360: 99.5912% ( 1) 00:16:58.030 11.360 - 11.413: 99.5965% ( 1) 00:16:58.030 40.960 - 41.173: 99.6018% ( 1) 00:16:58.030 43.093 - 43.307: 99.6071% ( 1) 00:16:58.030 55.893 - 56.320: 99.6124% ( 1) 00:16:58.030 154.453 - 155.307: 99.6177% ( 1) 00:16:58.030 3986.773 - 4014.080: 100.0000% ( 72) 00:16:58.030 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:58.030 [ 00:16:58.030 { 00:16:58.030 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.030 "subtype": "Discovery", 00:16:58.030 "listen_addresses": [], 00:16:58.030 "allow_any_host": true, 00:16:58.030 "hosts": [] 00:16:58.030 }, 00:16:58.030 { 00:16:58.030 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:58.030 "subtype": "NVMe", 00:16:58.030 "listen_addresses": [ 00:16:58.030 { 00:16:58.030 "trtype": "VFIOUSER", 00:16:58.030 "adrfam": "IPv4", 00:16:58.030 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:58.030 "trsvcid": "0" 00:16:58.030 } 00:16:58.030 ], 00:16:58.030 "allow_any_host": true, 00:16:58.030 
"hosts": [], 00:16:58.030 "serial_number": "SPDK1", 00:16:58.030 "model_number": "SPDK bdev Controller", 00:16:58.030 "max_namespaces": 32, 00:16:58.030 "min_cntlid": 1, 00:16:58.030 "max_cntlid": 65519, 00:16:58.030 "namespaces": [ 00:16:58.030 { 00:16:58.030 "nsid": 1, 00:16:58.030 "bdev_name": "Malloc1", 00:16:58.030 "name": "Malloc1", 00:16:58.030 "nguid": "9B10EBD7FD2544439783A4208F52E9FA", 00:16:58.030 "uuid": "9b10ebd7-fd25-4443-9783-a4208f52e9fa" 00:16:58.030 }, 00:16:58.030 { 00:16:58.030 "nsid": 2, 00:16:58.030 "bdev_name": "Malloc3", 00:16:58.030 "name": "Malloc3", 00:16:58.030 "nguid": "3C8FC2FEF09141408FAC633DC3D58196", 00:16:58.030 "uuid": "3c8fc2fe-f091-4140-8fac-633dc3d58196" 00:16:58.030 } 00:16:58.030 ] 00:16:58.030 }, 00:16:58.030 { 00:16:58.030 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:58.030 "subtype": "NVMe", 00:16:58.030 "listen_addresses": [ 00:16:58.030 { 00:16:58.030 "trtype": "VFIOUSER", 00:16:58.030 "adrfam": "IPv4", 00:16:58.030 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:58.030 "trsvcid": "0" 00:16:58.030 } 00:16:58.030 ], 00:16:58.030 "allow_any_host": true, 00:16:58.030 "hosts": [], 00:16:58.030 "serial_number": "SPDK2", 00:16:58.030 "model_number": "SPDK bdev Controller", 00:16:58.030 "max_namespaces": 32, 00:16:58.030 "min_cntlid": 1, 00:16:58.030 "max_cntlid": 65519, 00:16:58.030 "namespaces": [ 00:16:58.030 { 00:16:58.030 "nsid": 1, 00:16:58.030 "bdev_name": "Malloc2", 00:16:58.030 "name": "Malloc2", 00:16:58.030 "nguid": "9D2579ACBFC441CFA6EF2F5B8C3E43B6", 00:16:58.030 "uuid": "9d2579ac-bfc4-41cf-a6ef-2f5b8c3e43b6" 00:16:58.030 } 00:16:58.030 ] 00:16:58.030 } 00:16:58.030 ] 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2056776 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:58.030 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:58.292 Malloc4 00:16:58.292 [2024-12-05 21:09:59.486508] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:58.292 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:58.292 [2024-12-05 21:09:59.649636] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:58.292 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:58.292 Asynchronous Event Request test 00:16:58.292 Attaching to 
/var/run/vfio-user/domain/vfio-user2/2 00:16:58.292 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:58.292 Registering asynchronous event callbacks... 00:16:58.292 Starting namespace attribute notice tests for all controllers... 00:16:58.292 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:58.292 aer_cb - Changed Namespace 00:16:58.292 Cleaning up... 00:16:58.554 [ 00:16:58.554 { 00:16:58.554 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.554 "subtype": "Discovery", 00:16:58.554 "listen_addresses": [], 00:16:58.554 "allow_any_host": true, 00:16:58.554 "hosts": [] 00:16:58.554 }, 00:16:58.554 { 00:16:58.554 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:58.554 "subtype": "NVMe", 00:16:58.554 "listen_addresses": [ 00:16:58.554 { 00:16:58.554 "trtype": "VFIOUSER", 00:16:58.554 "adrfam": "IPv4", 00:16:58.554 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:58.554 "trsvcid": "0" 00:16:58.554 } 00:16:58.554 ], 00:16:58.554 "allow_any_host": true, 00:16:58.554 "hosts": [], 00:16:58.554 "serial_number": "SPDK1", 00:16:58.554 "model_number": "SPDK bdev Controller", 00:16:58.554 "max_namespaces": 32, 00:16:58.554 "min_cntlid": 1, 00:16:58.554 "max_cntlid": 65519, 00:16:58.554 "namespaces": [ 00:16:58.554 { 00:16:58.554 "nsid": 1, 00:16:58.554 "bdev_name": "Malloc1", 00:16:58.554 "name": "Malloc1", 00:16:58.554 "nguid": "9B10EBD7FD2544439783A4208F52E9FA", 00:16:58.554 "uuid": "9b10ebd7-fd25-4443-9783-a4208f52e9fa" 00:16:58.554 }, 00:16:58.554 { 00:16:58.554 "nsid": 2, 00:16:58.554 "bdev_name": "Malloc3", 00:16:58.554 "name": "Malloc3", 00:16:58.554 "nguid": "3C8FC2FEF09141408FAC633DC3D58196", 00:16:58.554 "uuid": "3c8fc2fe-f091-4140-8fac-633dc3d58196" 00:16:58.554 } 00:16:58.554 ] 00:16:58.554 }, 00:16:58.554 { 00:16:58.554 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:58.554 "subtype": "NVMe", 00:16:58.554 "listen_addresses": [ 00:16:58.554 { 00:16:58.554 "trtype": "VFIOUSER", 00:16:58.554 
"adrfam": "IPv4", 00:16:58.554 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:58.554 "trsvcid": "0" 00:16:58.554 } 00:16:58.554 ], 00:16:58.554 "allow_any_host": true, 00:16:58.554 "hosts": [], 00:16:58.554 "serial_number": "SPDK2", 00:16:58.554 "model_number": "SPDK bdev Controller", 00:16:58.554 "max_namespaces": 32, 00:16:58.554 "min_cntlid": 1, 00:16:58.554 "max_cntlid": 65519, 00:16:58.554 "namespaces": [ 00:16:58.554 { 00:16:58.554 "nsid": 1, 00:16:58.554 "bdev_name": "Malloc2", 00:16:58.554 "name": "Malloc2", 00:16:58.554 "nguid": "9D2579ACBFC441CFA6EF2F5B8C3E43B6", 00:16:58.554 "uuid": "9d2579ac-bfc4-41cf-a6ef-2f5b8c3e43b6" 00:16:58.554 }, 00:16:58.554 { 00:16:58.554 "nsid": 2, 00:16:58.554 "bdev_name": "Malloc4", 00:16:58.554 "name": "Malloc4", 00:16:58.554 "nguid": "0B10062E179B4785A8F1B89A3A648E96", 00:16:58.554 "uuid": "0b10062e-179b-4785-a8f1-b89a3a648e96" 00:16:58.554 } 00:16:58.554 ] 00:16:58.554 } 00:16:58.554 ] 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2056776 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2047687 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2047687 ']' 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2047687 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2047687 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2047687' 00:16:58.554 killing process with pid 2047687 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2047687 00:16:58.554 21:09:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2047687 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2056872 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2056872' 00:16:58.815 Process pid: 2056872 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@60 -- # waitforlisten 2056872 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2056872 ']' 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.815 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:58.815 [2024-12-05 21:10:00.147828] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:58.815 [2024-12-05 21:10:00.148777] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:16:58.815 [2024-12-05 21:10:00.148821] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.815 [2024-12-05 21:10:00.229880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.077 [2024-12-05 21:10:00.266401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.077 [2024-12-05 21:10:00.266434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:59.077 [2024-12-05 21:10:00.266442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.077 [2024-12-05 21:10:00.266449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.077 [2024-12-05 21:10:00.266455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.077 [2024-12-05 21:10:00.267913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.077 [2024-12-05 21:10:00.268125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.077 [2024-12-05 21:10:00.268125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.077 [2024-12-05 21:10:00.267970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.077 [2024-12-05 21:10:00.324593] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:59.077 [2024-12-05 21:10:00.324736] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:59.077 [2024-12-05 21:10:00.325646] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:59.077 [2024-12-05 21:10:00.326522] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:59.077 [2024-12-05 21:10:00.326585] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:59.647 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.647 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:59.647 21:10:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:00.586 21:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:00.845 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:00.845 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:00.845 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:00.845 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:00.845 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:01.105 Malloc1 00:17:01.105 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:01.105 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:01.363 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:01.623 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:01.623 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:01.623 21:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:01.623 Malloc2 00:17:01.623 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:01.882 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:02.141 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:02.141 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:02.141 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2056872 00:17:02.141 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2056872 ']' 00:17:02.141 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2056872 00:17:02.141 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.401 21:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056872 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056872' 00:17:02.401 killing process with pid 2056872 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2056872 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2056872 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:02.401 00:17:02.401 real 0m51.480s 00:17:02.401 user 3m17.464s 00:17:02.401 sys 0m2.765s 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:02.401 ************************************ 00:17:02.401 END TEST nvmf_vfio_user 00:17:02.401 ************************************ 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.401 21:10:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.662 ************************************ 00:17:02.662 START TEST nvmf_vfio_user_nvme_compliance 00:17:02.662 ************************************ 00:17:02.662 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:02.662 * Looking for test storage... 00:17:02.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:02.662 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:02.662 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:17:02.662 21:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.662 21:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.662 21:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:02.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.662 --rc genhtml_branch_coverage=1 00:17:02.662 --rc genhtml_function_coverage=1 00:17:02.662 --rc genhtml_legend=1 00:17:02.662 --rc geninfo_all_blocks=1 00:17:02.662 --rc geninfo_unexecuted_blocks=1 00:17:02.662 00:17:02.662 ' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:02.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.662 --rc genhtml_branch_coverage=1 00:17:02.662 --rc genhtml_function_coverage=1 00:17:02.662 --rc genhtml_legend=1 00:17:02.662 --rc geninfo_all_blocks=1 00:17:02.662 --rc geninfo_unexecuted_blocks=1 00:17:02.662 00:17:02.662 ' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:02.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.662 --rc genhtml_branch_coverage=1 00:17:02.662 --rc genhtml_function_coverage=1 00:17:02.662 --rc 
genhtml_legend=1 00:17:02.662 --rc geninfo_all_blocks=1 00:17:02.662 --rc geninfo_unexecuted_blocks=1 00:17:02.662 00:17:02.662 ' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:02.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.662 --rc genhtml_branch_coverage=1 00:17:02.662 --rc genhtml_function_coverage=1 00:17:02.662 --rc genhtml_legend=1 00:17:02.662 --rc geninfo_all_blocks=1 00:17:02.662 --rc geninfo_unexecuted_blocks=1 00:17:02.662 00:17:02.662 ' 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.662 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.663 21:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.663 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.923 21:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2057862 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2057862' 00:17:02.923 Process pid: 2057862 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2057862 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2057862 ']' 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.923 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.923 [2024-12-05 21:10:04.156781] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:17:02.923 [2024-12-05 21:10:04.156833] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.923 [2024-12-05 21:10:04.236354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.923 [2024-12-05 21:10:04.271761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.923 [2024-12-05 21:10:04.271795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.923 [2024-12-05 21:10:04.271803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.923 [2024-12-05 21:10:04.271810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.923 [2024-12-05 21:10:04.271817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:02.923 [2024-12-05 21:10:04.273333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.923 [2024-12-05 21:10:04.273446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.923 [2024-12-05 21:10:04.273449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.865 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.865 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:03.866 21:10:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:04.805 21:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.805 21:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 malloc0 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:04.805 21:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:04.805 00:17:04.805 00:17:04.805 CUnit - A unit testing framework for C - Version 2.1-3 00:17:04.805 http://cunit.sourceforge.net/ 00:17:04.805 00:17:04.805 00:17:04.805 Suite: nvme_compliance 00:17:05.064 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 21:10:06.241320] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.065 [2024-12-05 21:10:06.242677] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:05.065 [2024-12-05 21:10:06.242688] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:05.065 [2024-12-05 21:10:06.242693] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:05.065 [2024-12-05 21:10:06.244334] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.065 passed 00:17:05.065 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 21:10:06.340915] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.065 [2024-12-05 21:10:06.343940] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.065 passed 00:17:05.065 Test: admin_identify_ns ...[2024-12-05 21:10:06.440115] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.065 [2024-12-05 21:10:06.499873] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:05.324 [2024-12-05 21:10:06.507874] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:05.324 [2024-12-05 21:10:06.528977] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:17:05.324 passed 00:17:05.324 Test: admin_get_features_mandatory_features ...[2024-12-05 21:10:06.622875] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.324 [2024-12-05 21:10:06.625900] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.324 passed 00:17:05.324 Test: admin_get_features_optional_features ...[2024-12-05 21:10:06.719417] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.325 [2024-12-05 21:10:06.722435] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.584 passed 00:17:05.584 Test: admin_set_features_number_of_queues ...[2024-12-05 21:10:06.816594] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.584 [2024-12-05 21:10:06.921983] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.584 passed 00:17:05.584 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 21:10:07.012659] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.584 [2024-12-05 21:10:07.015678] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.845 passed 00:17:05.845 Test: admin_get_log_page_with_lpo ...[2024-12-05 21:10:07.107818] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.845 [2024-12-05 21:10:07.176875] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:05.845 [2024-12-05 21:10:07.189930] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.845 passed 00:17:06.157 Test: fabric_property_get ...[2024-12-05 21:10:07.281540] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.157 [2024-12-05 21:10:07.282781] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:06.157 [2024-12-05 21:10:07.284599] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.157 passed 00:17:06.157 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 21:10:07.376127] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.157 [2024-12-05 21:10:07.377387] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:06.157 [2024-12-05 21:10:07.380149] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.157 passed 00:17:06.157 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 21:10:07.468266] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.157 [2024-12-05 21:10:07.559875] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:06.449 [2024-12-05 21:10:07.575882] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:06.449 [2024-12-05 21:10:07.580962] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.449 passed 00:17:06.449 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 21:10:07.674978] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.449 [2024-12-05 21:10:07.676226] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:06.449 [2024-12-05 21:10:07.677996] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.449 passed 00:17:06.449 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 21:10:07.771100] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.449 [2024-12-05 21:10:07.850879] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:06.449 [2024-12-05 
21:10:07.874872] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:06.449 [2024-12-05 21:10:07.879954] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.736 passed 00:17:06.736 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 21:10:07.969570] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.736 [2024-12-05 21:10:07.970825] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:06.736 [2024-12-05 21:10:07.970846] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:06.736 [2024-12-05 21:10:07.972584] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.736 passed 00:17:06.737 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 21:10:08.065732] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.737 [2024-12-05 21:10:08.156874] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:06.737 [2024-12-05 21:10:08.164873] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:07.004 [2024-12-05 21:10:08.172870] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:07.005 [2024-12-05 21:10:08.180873] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:07.005 [2024-12-05 21:10:08.209953] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:07.005 passed 00:17:07.005 Test: admin_create_io_sq_verify_pc ...[2024-12-05 21:10:08.303532] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:07.005 [2024-12-05 21:10:08.317877] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:07.005 [2024-12-05 21:10:08.335686] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:07.005 passed 00:17:07.005 Test: admin_create_io_qp_max_qps ...[2024-12-05 21:10:08.431203] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:08.390 [2024-12-05 21:10:09.547875] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:08.650 [2024-12-05 21:10:09.921195] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:08.650 passed 00:17:08.650 Test: admin_create_io_sq_shared_cq ...[2024-12-05 21:10:10.015127] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:08.912 [2024-12-05 21:10:10.146876] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:08.912 [2024-12-05 21:10:10.183940] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:08.912 passed 00:17:08.912 00:17:08.912 Run Summary: Type Total Ran Passed Failed Inactive 00:17:08.912 suites 1 1 n/a 0 0 00:17:08.912 tests 18 18 18 0 0 00:17:08.912 asserts 360 360 360 0 n/a 00:17:08.912 00:17:08.912 Elapsed time = 1.655 seconds 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2057862 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2057862 ']' 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2057862 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2057862 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2057862' 00:17:08.912 killing process with pid 2057862 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2057862 00:17:08.912 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2057862 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:09.174 00:17:09.174 real 0m6.576s 00:17:09.174 user 0m18.667s 00:17:09.174 sys 0m0.550s 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.174 ************************************ 00:17:09.174 END TEST nvmf_vfio_user_nvme_compliance 00:17:09.174 ************************************ 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.174 ************************************ 00:17:09.174 START TEST nvmf_vfio_user_fuzz 00:17:09.174 ************************************ 00:17:09.174 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:09.436 * Looking for test storage... 00:17:09.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.436 21:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:09.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.436 --rc genhtml_branch_coverage=1 00:17:09.436 --rc genhtml_function_coverage=1 00:17:09.436 --rc genhtml_legend=1 00:17:09.436 --rc geninfo_all_blocks=1 00:17:09.436 --rc geninfo_unexecuted_blocks=1 00:17:09.436 00:17:09.436 ' 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:09.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.436 --rc genhtml_branch_coverage=1 00:17:09.436 --rc genhtml_function_coverage=1 00:17:09.436 --rc genhtml_legend=1 00:17:09.436 --rc geninfo_all_blocks=1 00:17:09.436 --rc geninfo_unexecuted_blocks=1 00:17:09.436 00:17:09.436 ' 00:17:09.436 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:09.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.436 --rc genhtml_branch_coverage=1 00:17:09.436 --rc genhtml_function_coverage=1 00:17:09.437 --rc genhtml_legend=1 00:17:09.437 --rc geninfo_all_blocks=1 00:17:09.437 --rc geninfo_unexecuted_blocks=1 00:17:09.437 00:17:09.437 ' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:09.437 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:09.437 --rc genhtml_branch_coverage=1 00:17:09.437 --rc genhtml_function_coverage=1 00:17:09.437 --rc genhtml_legend=1 00:17:09.437 --rc geninfo_all_blocks=1 00:17:09.437 --rc geninfo_unexecuted_blocks=1 00:17:09.437 00:17:09.437 ' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.437 21:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:09.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2059236 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2059236' 00:17:09.437 Process pid: 2059236 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2059236 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2059236 ']' 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.437 21:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.437 21:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.383 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.383 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:10.383 21:10:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:11.325 malloc0 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:11.325 21:10:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:43.441 Fuzzing completed. Shutting down the fuzz application 00:17:43.441 00:17:43.441 Dumping successful admin opcodes: 00:17:43.441 9, 10, 00:17:43.441 Dumping successful io opcodes: 00:17:43.441 0, 00:17:43.441 NS: 0x20000081ef00 I/O qp, Total commands completed: 1091649, total successful commands: 4300, random_seed: 214661248 00:17:43.442 NS: 0x20000081ef00 admin qp, Total commands completed: 138352, total successful commands: 30, random_seed: 3635904320 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2059236 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2059236 ']' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2059236 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059236 00:17:43.442 21:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059236' 00:17:43.442 killing process with pid 2059236 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2059236 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2059236 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:43.442 00:17:43.442 real 0m33.789s 00:17:43.442 user 0m38.109s 00:17:43.442 sys 0m26.009s 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:43.442 ************************************ 00:17:43.442 END TEST nvmf_vfio_user_fuzz 00:17:43.442 ************************************ 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
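The nvmf_vfio_user_fuzz run traced above builds a VFIOUSER target (create transport, make the socket directory, create a 64 MiB/512 B malloc bdev, create the subsystem, attach the namespace, add the listener) and then points nvme_fuzz at the resulting transport ID. A minimal sketch of that sequence, with the RPC invocations collected rather than executed, since a running SPDK target, rpc.py, and nvme_fuzz are not assumed available here:

```shell
# Sketch of the vfio-user fuzz target setup seen in the trace above.
# Values mirror vfio_user_fuzz.sh; commands are only collected into an
# array (they would normally be passed to SPDK's rpc.py against a
# running nvmf_tgt).
nqn="nqn.2021-09.io.spdk:cnode0"
traddr="/var/run/vfio-user"

cmds=(
    "nvmf_create_transport -t VFIOUSER"
    "bdev_malloc_create 64 512 -b malloc0"
    "nvmf_create_subsystem $nqn -a -s spdk"
    "nvmf_subsystem_add_ns $nqn malloc0"
    "nvmf_subsystem_add_listener $nqn -t VFIOUSER -a $traddr -s 0"
)

# Transport ID handed to nvme_fuzz via -F, as in the log.
trid="trtype:VFIOUSER subnqn:$nqn traddr:$traddr"

printf '%s\n' "${cmds[@]}"
echo "$trid"
```

The trid string matches the one the script passes to nvme_fuzz (`-t 30 -S 123456 ... -N -a` in the trace); the teardown afterwards is just `nvmf_delete_subsystem` plus removing `/var/run/vfio-user`.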
00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.442 ************************************ 00:17:43.442 START TEST nvmf_auth_target 00:17:43.442 ************************************ 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:43.442 * Looking for test storage... 00:17:43.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.442 21:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.442 21:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:43.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.442 --rc genhtml_branch_coverage=1 00:17:43.442 --rc genhtml_function_coverage=1 00:17:43.442 --rc genhtml_legend=1 00:17:43.442 --rc geninfo_all_blocks=1 00:17:43.442 --rc geninfo_unexecuted_blocks=1 00:17:43.442 00:17:43.442 ' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:43.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.442 --rc genhtml_branch_coverage=1 00:17:43.442 --rc genhtml_function_coverage=1 00:17:43.442 --rc genhtml_legend=1 00:17:43.442 --rc geninfo_all_blocks=1 00:17:43.442 --rc geninfo_unexecuted_blocks=1 00:17:43.442 00:17:43.442 ' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:43.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.442 --rc genhtml_branch_coverage=1 00:17:43.442 --rc genhtml_function_coverage=1 00:17:43.442 --rc genhtml_legend=1 00:17:43.442 --rc geninfo_all_blocks=1 00:17:43.442 --rc geninfo_unexecuted_blocks=1 00:17:43.442 00:17:43.442 ' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:43.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.442 --rc genhtml_branch_coverage=1 00:17:43.442 --rc genhtml_function_coverage=1 00:17:43.442 --rc genhtml_legend=1 00:17:43.442 
--rc geninfo_all_blocks=1 00:17:43.442 --rc geninfo_unexecuted_blocks=1 00:17:43.442 00:17:43.442 ' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.442 
21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.442 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:43.443 21:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:43.443 21:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:43.443 21:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.582 21:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.582 21:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:51.582 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:51.582 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.582 
21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:51.582 Found net devices under 0000:31:00.0: cvl_0_0 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.582 
21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:51.582 Found net devices under 0000:31:00.1: cvl_0_1 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.582 21:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.582 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:17:51.583 00:17:51.583 --- 10.0.0.2 ping statistics --- 00:17:51.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.583 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:51.583 00:17:51.583 --- 10.0.0.1 ping statistics --- 00:17:51.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.583 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.583 21:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2069946 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2069946 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2069946 ']' 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.843 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2070172 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dd3ef9a1401e2f9714a617f74cb0b37e9f6dba0e434c0a5a 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Lv2 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dd3ef9a1401e2f9714a617f74cb0b37e9f6dba0e434c0a5a 0 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dd3ef9a1401e2f9714a617f74cb0b37e9f6dba0e434c0a5a 0 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dd3ef9a1401e2f9714a617f74cb0b37e9f6dba0e434c0a5a 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Lv2 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Lv2 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Lv2 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cfecf26cab97c3ae9eb1f6836c46cb00bb89e05cb735afdeab3f0b8774b0ac2d 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zGA 00:17:52.786 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cfecf26cab97c3ae9eb1f6836c46cb00bb89e05cb735afdeab3f0b8774b0ac2d 3 00:17:52.787 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cfecf26cab97c3ae9eb1f6836c46cb00bb89e05cb735afdeab3f0b8774b0ac2d 3 00:17:52.787 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:52.787 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:52.787 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cfecf26cab97c3ae9eb1f6836c46cb00bb89e05cb735afdeab3f0b8774b0ac2d 00:17:52.787 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:52.787 21:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zGA 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zGA 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.zGA 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8360080757f3784daf138a5dd7a502fa 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pHH 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8360080757f3784daf138a5dd7a502fa 1 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8360080757f3784daf138a5dd7a502fa 1 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8360080757f3784daf138a5dd7a502fa 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pHH 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pHH 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.pHH 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc8a3493fbbde021216295f755b80942a957bdebcd458e54 00:17:52.787 21:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YOy
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc8a3493fbbde021216295f755b80942a957bdebcd458e54 2
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc8a3493fbbde021216295f755b80942a957bdebcd458e54 2
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc8a3493fbbde021216295f755b80942a957bdebcd458e54
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YOy
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YOy
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.YOy
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb5d94c0ce4572bd681c8447c6d8595ec51035b4f0ffdd40
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fXz
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fb5d94c0ce4572bd681c8447c6d8595ec51035b4f0ffdd40 2
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb5d94c0ce4572bd681c8447c6d8595ec51035b4f0ffdd40 2
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb5d94c0ce4572bd681c8447c6d8595ec51035b4f0ffdd40
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:17:52.787 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fXz
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fXz
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.fXz
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bd3e46abc0541bd3757fd267999e3cf1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JKX
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bd3e46abc0541bd3757fd267999e3cf1 1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bd3e46abc0541bd3757fd267999e3cf1 1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bd3e46abc0541bd3757fd267999e3cf1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JKX
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JKX
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.JKX
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b7e13a704cb9fa54bf1779593da67d0460d52cc115180ad1d107cdcb79881e7e
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aHL
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b7e13a704cb9fa54bf1779593da67d0460d52cc115180ad1d107cdcb79881e7e 3
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b7e13a704cb9fa54bf1779593da67d0460d52cc115180ad1d107cdcb79881e7e 3
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b7e13a704cb9fa54bf1779593da67d0460d52cc115180ad1d107cdcb79881e7e
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aHL
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aHL
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.aHL
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2069946
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2069946 ']'
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:53.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:53.049 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2070172 /var/tmp/host.sock
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2070172 ']'
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:17:53.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
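[Editor's note] The gen_dhchap_key/format_key trace above builds each secret by reading len/2 random bytes from /dev/urandom with xxd as a hex string, then wrapping that ASCII string via an inline Python program. The log never shows that program's body, but the resulting secrets later in this run (e.g. DHHC-1:02:ZmI1...==:) are consistent with base64 of the key text followed by its CRC32, little-endian, as used for NVMe DH-HMAC-CHAP shared secrets. A minimal standalone sketch of that wrapping step, assuming the DHHC-1:<digest-id>:<base64>: layout seen in this log (function name here is illustrative, not the SPDK source):

```python
import base64
import struct
import zlib

def format_dhchap_key(key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Wrap an ASCII hex key as prefix:digest:base64(key || crc32(key) LE):."""
    data = key.encode("ascii")
    data += struct.pack("<I", zlib.crc32(data))  # append CRC32 of the key text
    return "{}:{:02x}:{}:".format(prefix, digest_id, base64.b64encode(data).decode())

# The sha384/48 key generated in the trace above (digest id 2 = sha384)
print(format_dhchap_key("fb5d94c0ce4572bd681c8447c6d8595ec51035b4f0ffdd40", 2))
```

The trailing CRC lets a consumer detect a truncated or mistyped secret before attempting authentication.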
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.310 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Lv2
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Lv2
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Lv2
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.zGA ]]
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zGA
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zGA
00:17:53.569 21:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zGA
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pHH
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pHH
00:17:53.830 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pHH
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.YOy ]]
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YOy
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YOy
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YOy
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fXz
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.095 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.096 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.096 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.fXz
00:17:54.096 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.fXz
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.JKX ]]
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKX
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKX
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKX
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:17:54.355 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aHL
00:17:54.615 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.615 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aHL
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aHL
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:54.616 21:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:54.876 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:55.137
00:17:55.137 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:55.137 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:55.137 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:55.398 {
00:17:55.398 "cntlid": 1,
00:17:55.398 "qid": 0,
00:17:55.398 "state": "enabled",
00:17:55.398 "thread": "nvmf_tgt_poll_group_000",
00:17:55.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:17:55.398 "listen_address": {
00:17:55.398 "trtype": "TCP",
00:17:55.398 "adrfam": "IPv4",
00:17:55.398 "traddr": "10.0.0.2",
00:17:55.398 "trsvcid": "4420"
00:17:55.398 },
00:17:55.398 "peer_address": {
00:17:55.398 "trtype": "TCP",
00:17:55.398 "adrfam": "IPv4",
00:17:55.398 "traddr": "10.0.0.1",
00:17:55.398 "trsvcid": "47950"
00:17:55.398 },
00:17:55.398 "auth": {
00:17:55.398 "state": "completed",
00:17:55.398 "digest": "sha256",
00:17:55.398 "dhgroup": "null"
00:17:55.398 }
00:17:55.398 }
00:17:55.398 ]'
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:55.398 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:55.663 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=:
00:17:55.663 21:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=:
00:17:56.235 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:56.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:56.514 21:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:56.775
00:17:56.776 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:56.776 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:56.776 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:57.036 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:57.037 {
00:17:57.037 "cntlid": 3,
00:17:57.037 "qid": 0,
00:17:57.037 "state": "enabled",
00:17:57.037 "thread": "nvmf_tgt_poll_group_000",
00:17:57.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:17:57.037 "listen_address": {
00:17:57.037 "trtype": "TCP",
00:17:57.037 "adrfam": "IPv4",
00:17:57.037 "traddr": "10.0.0.2",
00:17:57.037 "trsvcid": "4420"
00:17:57.037 },
00:17:57.037 "peer_address": {
00:17:57.037 "trtype": "TCP",
00:17:57.037 "adrfam": "IPv4",
00:17:57.037 "traddr": "10.0.0.1",
00:17:57.037 "trsvcid": "47976"
00:17:57.037 },
00:17:57.037 "auth": {
00:17:57.037 "state": "completed",
00:17:57.037 "digest": "sha256",
00:17:57.037 "dhgroup": "null"
00:17:57.037 }
00:17:57.037 }
00:17:57.037 ]'
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:57.037 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:57.299 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==:
00:17:57.299 21:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==:
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:58.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.243 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:58.504
00:17:58.504 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:58.504 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:58.504 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:58.765 {
00:17:58.765 "cntlid": 5,
00:17:58.765 "qid": 0,
00:17:58.765 "state": "enabled",
00:17:58.765 "thread": "nvmf_tgt_poll_group_000",
00:17:58.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:17:58.765 "listen_address": {
00:17:58.765 "trtype": "TCP",
00:17:58.765 "adrfam": "IPv4",
00:17:58.765 "traddr": "10.0.0.2",
00:17:58.765 "trsvcid": "4420"
00:17:58.765 },
00:17:58.765 "peer_address": {
00:17:58.765 "trtype": "TCP",
00:17:58.765 "adrfam": "IPv4",
00:17:58.765 "traddr": "10.0.0.1",
00:17:58.765 "trsvcid": "48000"
00:17:58.765 },
00:17:58.765 "auth": {
00:17:58.765 "state": "completed",
00:17:58.765 "digest": "sha256",
00:17:58.765 "dhgroup": "null"
00:17:58.765 }
00:17:58.765 }
00:17:58.765 ]'
00:17:58.765 21:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:58.765 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:59.025 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH:
00:17:59.025 21:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH:
00:17:59.597 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:59.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.858 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.859 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.119 00:18:00.119 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.119 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.119 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.380 
21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.380 { 00:18:00.380 "cntlid": 7, 00:18:00.380 "qid": 0, 00:18:00.380 "state": "enabled", 00:18:00.380 "thread": "nvmf_tgt_poll_group_000", 00:18:00.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:00.380 "listen_address": { 00:18:00.380 "trtype": "TCP", 00:18:00.380 "adrfam": "IPv4", 00:18:00.380 "traddr": "10.0.0.2", 00:18:00.380 "trsvcid": "4420" 00:18:00.380 }, 00:18:00.380 "peer_address": { 00:18:00.380 "trtype": "TCP", 00:18:00.380 "adrfam": "IPv4", 00:18:00.380 "traddr": "10.0.0.1", 00:18:00.380 "trsvcid": "48016" 00:18:00.380 }, 00:18:00.380 "auth": { 00:18:00.380 "state": "completed", 00:18:00.380 "digest": "sha256", 00:18:00.380 "dhgroup": "null" 00:18:00.380 } 00:18:00.380 } 00:18:00.380 ]' 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.380 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.641 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:00.641 21:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.584 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.585 21:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.846 00:18:01.846 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.846 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.846 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.107 { 00:18:02.107 "cntlid": 9, 00:18:02.107 "qid": 0, 00:18:02.107 "state": "enabled", 00:18:02.107 "thread": "nvmf_tgt_poll_group_000", 00:18:02.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:02.107 "listen_address": { 00:18:02.107 "trtype": "TCP", 00:18:02.107 "adrfam": "IPv4", 00:18:02.107 "traddr": "10.0.0.2", 00:18:02.107 "trsvcid": "4420" 00:18:02.107 }, 00:18:02.107 "peer_address": { 00:18:02.107 "trtype": "TCP", 00:18:02.107 "adrfam": "IPv4", 00:18:02.107 "traddr": "10.0.0.1", 00:18:02.107 "trsvcid": "37596" 00:18:02.107 
}, 00:18:02.107 "auth": { 00:18:02.107 "state": "completed", 00:18:02.107 "digest": "sha256", 00:18:02.107 "dhgroup": "ffdhe2048" 00:18:02.107 } 00:18:02.107 } 00:18:02.107 ]' 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.107 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.368 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:02.368 21:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:03.312 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.312 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.312 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.312 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.312 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.312 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.313 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.574 00:18:03.574 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.574 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.574 21:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.835 { 00:18:03.835 "cntlid": 11, 00:18:03.835 "qid": 0, 00:18:03.835 "state": "enabled", 00:18:03.835 "thread": "nvmf_tgt_poll_group_000", 00:18:03.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.835 "listen_address": { 00:18:03.835 "trtype": "TCP", 00:18:03.835 "adrfam": "IPv4", 00:18:03.835 "traddr": "10.0.0.2", 00:18:03.835 "trsvcid": "4420" 00:18:03.835 }, 00:18:03.835 "peer_address": { 00:18:03.835 "trtype": "TCP", 00:18:03.835 "adrfam": "IPv4", 00:18:03.835 "traddr": "10.0.0.1", 00:18:03.835 "trsvcid": "37612" 00:18:03.835 }, 00:18:03.835 "auth": { 00:18:03.835 "state": "completed", 00:18:03.835 "digest": "sha256", 00:18:03.835 "dhgroup": "ffdhe2048" 00:18:03.835 } 00:18:03.835 } 00:18:03.835 ]' 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.835 21:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.835 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.096 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:04.096 21:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:04.666 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.927 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.188 00:18:05.188 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.188 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.188 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.465 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.465 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.465 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.465 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.465 21:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.465 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.465 { 00:18:05.465 "cntlid": 13, 00:18:05.466 "qid": 0, 00:18:05.466 "state": "enabled", 00:18:05.466 "thread": "nvmf_tgt_poll_group_000", 00:18:05.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.466 "listen_address": { 00:18:05.466 "trtype": "TCP", 00:18:05.466 "adrfam": "IPv4", 00:18:05.466 "traddr": "10.0.0.2", 00:18:05.466 "trsvcid": "4420" 00:18:05.466 }, 00:18:05.466 "peer_address": { 00:18:05.466 "trtype": "TCP", 00:18:05.466 "adrfam": "IPv4", 00:18:05.466 "traddr": "10.0.0.1", 00:18:05.466 "trsvcid": "37638" 00:18:05.466 }, 00:18:05.466 "auth": { 00:18:05.466 "state": "completed", 00:18:05.466 "digest": "sha256", 00:18:05.466 "dhgroup": "ffdhe2048" 00:18:05.466 } 00:18:05.466 } 00:18:05.466 ]' 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.466 21:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.726 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:05.726 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.670 21:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.932 00:18:06.932 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.932 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.932 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.193 { 00:18:07.193 "cntlid": 15, 00:18:07.193 "qid": 0, 00:18:07.193 "state": "enabled", 00:18:07.193 "thread": "nvmf_tgt_poll_group_000", 00:18:07.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:07.193 "listen_address": { 00:18:07.193 "trtype": "TCP", 00:18:07.193 "adrfam": "IPv4", 00:18:07.193 "traddr": "10.0.0.2", 00:18:07.193 "trsvcid": "4420" 00:18:07.193 }, 00:18:07.193 "peer_address": { 00:18:07.193 "trtype": "TCP", 00:18:07.193 "adrfam": "IPv4", 00:18:07.193 "traddr": "10.0.0.1", 
00:18:07.193 "trsvcid": "37662" 00:18:07.193 }, 00:18:07.193 "auth": { 00:18:07.193 "state": "completed", 00:18:07.193 "digest": "sha256", 00:18:07.193 "dhgroup": "ffdhe2048" 00:18:07.193 } 00:18:07.193 } 00:18:07.193 ]' 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.193 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.454 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:07.454 21:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:08.026 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:08.288 21:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.288 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.549 00:18:08.549 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.549 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.549 21:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.810 { 00:18:08.810 "cntlid": 17, 00:18:08.810 "qid": 0, 00:18:08.810 "state": "enabled", 00:18:08.810 "thread": "nvmf_tgt_poll_group_000", 00:18:08.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:08.810 "listen_address": { 00:18:08.810 "trtype": "TCP", 00:18:08.810 "adrfam": "IPv4", 00:18:08.810 "traddr": "10.0.0.2", 00:18:08.810 "trsvcid": "4420" 00:18:08.810 }, 00:18:08.810 "peer_address": { 00:18:08.810 "trtype": "TCP", 00:18:08.810 "adrfam": "IPv4", 00:18:08.810 "traddr": "10.0.0.1", 00:18:08.810 "trsvcid": "37704" 00:18:08.810 }, 00:18:08.810 "auth": { 00:18:08.810 "state": "completed", 00:18:08.810 "digest": "sha256", 00:18:08.810 "dhgroup": "ffdhe3072" 00:18:08.810 } 00:18:08.810 } 00:18:08.810 ]' 00:18:08.810 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.811 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.811 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.811 21:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.811 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.071 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.071 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.071 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.071 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:09.071 21:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.013 21:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.013 21:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.013 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.014 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.014 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.273 00:18:10.273 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.273 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.273 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.533 { 00:18:10.533 "cntlid": 19, 00:18:10.533 "qid": 0, 00:18:10.533 "state": "enabled", 00:18:10.533 "thread": "nvmf_tgt_poll_group_000", 00:18:10.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:10.533 "listen_address": { 00:18:10.533 "trtype": "TCP", 00:18:10.533 "adrfam": "IPv4", 00:18:10.533 "traddr": "10.0.0.2", 00:18:10.533 "trsvcid": "4420" 00:18:10.533 }, 00:18:10.533 "peer_address": { 00:18:10.533 "trtype": "TCP", 00:18:10.533 "adrfam": "IPv4", 00:18:10.533 "traddr": "10.0.0.1", 00:18:10.533 "trsvcid": "37732" 00:18:10.533 }, 00:18:10.533 "auth": { 00:18:10.533 "state": "completed", 00:18:10.533 "digest": "sha256", 00:18:10.533 "dhgroup": "ffdhe3072" 00:18:10.533 } 00:18:10.533 } 00:18:10.533 ]' 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.533 21:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.792 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:10.792 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.747 21:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.747 21:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.747 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.006 00:18:12.006 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.006 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.006 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.265 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.265 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.265 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.266 { 00:18:12.266 "cntlid": 21, 00:18:12.266 "qid": 0, 00:18:12.266 "state": "enabled", 00:18:12.266 "thread": "nvmf_tgt_poll_group_000", 00:18:12.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:12.266 "listen_address": { 00:18:12.266 "trtype": "TCP", 00:18:12.266 "adrfam": "IPv4", 00:18:12.266 "traddr": "10.0.0.2", 00:18:12.266 
"trsvcid": "4420" 00:18:12.266 }, 00:18:12.266 "peer_address": { 00:18:12.266 "trtype": "TCP", 00:18:12.266 "adrfam": "IPv4", 00:18:12.266 "traddr": "10.0.0.1", 00:18:12.266 "trsvcid": "38630" 00:18:12.266 }, 00:18:12.266 "auth": { 00:18:12.266 "state": "completed", 00:18:12.266 "digest": "sha256", 00:18:12.266 "dhgroup": "ffdhe3072" 00:18:12.266 } 00:18:12.266 } 00:18:12.266 ]' 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.266 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.524 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:12.524 21:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.463 21:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.724 00:18:13.724 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.724 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.724 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.985 { 00:18:13.985 "cntlid": 23, 00:18:13.985 "qid": 0, 00:18:13.985 "state": "enabled", 00:18:13.985 "thread": "nvmf_tgt_poll_group_000", 00:18:13.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:13.985 "listen_address": { 00:18:13.985 "trtype": "TCP", 00:18:13.985 "adrfam": "IPv4", 00:18:13.985 "traddr": "10.0.0.2", 00:18:13.985 "trsvcid": "4420" 00:18:13.985 }, 00:18:13.985 "peer_address": { 00:18:13.985 "trtype": "TCP", 00:18:13.985 "adrfam": "IPv4", 00:18:13.985 "traddr": "10.0.0.1", 00:18:13.985 "trsvcid": "38658" 00:18:13.985 }, 00:18:13.985 "auth": { 00:18:13.985 "state": "completed", 00:18:13.985 "digest": "sha256", 00:18:13.985 "dhgroup": "ffdhe3072" 00:18:13.985 } 00:18:13.985 } 00:18:13.985 ]' 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.985 21:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.985 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.245 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:14.245 21:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.188 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.469 00:18:15.469 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.469 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.469 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.730 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.730 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.730 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.730 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.730 21:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.730 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.730 { 00:18:15.730 "cntlid": 25, 00:18:15.730 "qid": 0, 00:18:15.730 "state": "enabled", 00:18:15.730 "thread": "nvmf_tgt_poll_group_000", 00:18:15.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:15.730 "listen_address": { 00:18:15.730 "trtype": "TCP", 00:18:15.730 "adrfam": "IPv4", 00:18:15.730 "traddr": "10.0.0.2", 00:18:15.730 "trsvcid": "4420" 00:18:15.730 }, 00:18:15.730 "peer_address": { 00:18:15.730 "trtype": "TCP", 00:18:15.730 "adrfam": "IPv4", 00:18:15.730 "traddr": "10.0.0.1", 00:18:15.730 "trsvcid": "38682" 00:18:15.730 }, 00:18:15.730 "auth": { 00:18:15.730 "state": "completed", 00:18:15.730 "digest": "sha256", 00:18:15.730 "dhgroup": "ffdhe4096" 00:18:15.730 } 00:18:15.730 } 00:18:15.730 ]' 00:18:15.730 21:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.730 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.990 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:15.990 21:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.931 21:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.931 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.191 00:18:17.191 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.192 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.192 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.453 { 00:18:17.453 "cntlid": 27, 00:18:17.453 "qid": 0, 00:18:17.453 "state": "enabled", 00:18:17.453 "thread": "nvmf_tgt_poll_group_000", 00:18:17.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:17.453 "listen_address": { 00:18:17.453 "trtype": "TCP", 00:18:17.453 "adrfam": "IPv4", 00:18:17.453 "traddr": "10.0.0.2", 00:18:17.453 
"trsvcid": "4420" 00:18:17.453 }, 00:18:17.453 "peer_address": { 00:18:17.453 "trtype": "TCP", 00:18:17.453 "adrfam": "IPv4", 00:18:17.453 "traddr": "10.0.0.1", 00:18:17.453 "trsvcid": "38700" 00:18:17.453 }, 00:18:17.453 "auth": { 00:18:17.453 "state": "completed", 00:18:17.453 "digest": "sha256", 00:18:17.453 "dhgroup": "ffdhe4096" 00:18:17.453 } 00:18:17.453 } 00:18:17.453 ]' 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.453 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.714 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:17.714 21:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:18.330 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.330 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.330 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.330 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.605 21:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.867 00:18:18.867 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.867 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:18.867 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.129 { 00:18:19.129 "cntlid": 29, 00:18:19.129 "qid": 0, 00:18:19.129 "state": "enabled", 00:18:19.129 "thread": "nvmf_tgt_poll_group_000", 00:18:19.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.129 "listen_address": { 00:18:19.129 "trtype": "TCP", 00:18:19.129 "adrfam": "IPv4", 00:18:19.129 "traddr": "10.0.0.2", 00:18:19.129 "trsvcid": "4420" 00:18:19.129 }, 00:18:19.129 "peer_address": { 00:18:19.129 "trtype": "TCP", 00:18:19.129 "adrfam": "IPv4", 00:18:19.129 "traddr": "10.0.0.1", 00:18:19.129 "trsvcid": "38736" 00:18:19.129 }, 00:18:19.129 "auth": { 00:18:19.129 "state": "completed", 00:18:19.129 "digest": "sha256", 00:18:19.129 "dhgroup": "ffdhe4096" 00:18:19.129 } 00:18:19.129 } 00:18:19.129 ]' 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.129 21:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.129 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.390 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:19.390 21:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.345 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.606 00:18:20.606 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.606 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.606 21:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.867 { 00:18:20.867 "cntlid": 31, 00:18:20.867 "qid": 0, 00:18:20.867 "state": "enabled", 00:18:20.867 "thread": "nvmf_tgt_poll_group_000", 00:18:20.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:20.867 "listen_address": { 00:18:20.867 "trtype": "TCP", 00:18:20.867 "adrfam": "IPv4", 00:18:20.867 "traddr": "10.0.0.2", 00:18:20.867 "trsvcid": "4420" 00:18:20.867 }, 00:18:20.867 "peer_address": { 00:18:20.867 "trtype": "TCP", 00:18:20.867 "adrfam": "IPv4", 00:18:20.867 "traddr": "10.0.0.1", 00:18:20.867 "trsvcid": "43200" 00:18:20.867 }, 00:18:20.867 "auth": { 00:18:20.867 "state": "completed", 00:18:20.867 "digest": "sha256", 00:18:20.867 "dhgroup": "ffdhe4096" 00:18:20.867 } 00:18:20.867 } 00:18:20.867 ]' 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.867 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.127 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.127 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.127 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.128 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:21.128 21:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:22.071 21:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.071 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.643 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.643 { 00:18:22.643 "cntlid": 33, 00:18:22.643 "qid": 0, 00:18:22.643 "state": "enabled", 00:18:22.643 "thread": "nvmf_tgt_poll_group_000", 00:18:22.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:22.643 "listen_address": { 00:18:22.643 "trtype": "TCP", 00:18:22.643 "adrfam": "IPv4", 00:18:22.643 "traddr": "10.0.0.2", 00:18:22.643 
"trsvcid": "4420" 00:18:22.643 }, 00:18:22.643 "peer_address": { 00:18:22.643 "trtype": "TCP", 00:18:22.643 "adrfam": "IPv4", 00:18:22.643 "traddr": "10.0.0.1", 00:18:22.643 "trsvcid": "43232" 00:18:22.643 }, 00:18:22.643 "auth": { 00:18:22.643 "state": "completed", 00:18:22.643 "digest": "sha256", 00:18:22.643 "dhgroup": "ffdhe6144" 00:18:22.643 } 00:18:22.643 } 00:18:22.643 ]' 00:18:22.643 21:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.643 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.643 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:22.904 21:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:23.855 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.856 21:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.856 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.437 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.437 { 00:18:24.437 "cntlid": 35, 00:18:24.437 "qid": 0, 00:18:24.437 "state": "enabled", 00:18:24.437 "thread": "nvmf_tgt_poll_group_000", 00:18:24.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:24.437 "listen_address": { 00:18:24.437 "trtype": "TCP", 00:18:24.437 "adrfam": "IPv4", 00:18:24.437 "traddr": "10.0.0.2", 00:18:24.437 "trsvcid": "4420" 00:18:24.437 }, 00:18:24.437 "peer_address": { 00:18:24.437 "trtype": "TCP", 00:18:24.437 "adrfam": "IPv4", 00:18:24.437 "traddr": "10.0.0.1", 00:18:24.437 "trsvcid": "43252" 00:18:24.437 }, 00:18:24.437 "auth": { 00:18:24.437 "state": "completed", 00:18:24.437 "digest": "sha256", 00:18:24.437 "dhgroup": "ffdhe6144" 00:18:24.437 } 00:18:24.437 } 00:18:24.437 ]' 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.437 21:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.437 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.698 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.698 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.699 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.699 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.699 21:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.699 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:24.699 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:25.642 21:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:25.903 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:25.903 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.904 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.165 00:18:26.166 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.166 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.166 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.427 21:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.427 { 00:18:26.427 "cntlid": 37, 00:18:26.427 "qid": 0, 00:18:26.427 "state": "enabled", 00:18:26.427 "thread": "nvmf_tgt_poll_group_000", 00:18:26.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:26.427 "listen_address": { 00:18:26.427 "trtype": "TCP", 00:18:26.427 "adrfam": "IPv4", 00:18:26.427 "traddr": "10.0.0.2", 00:18:26.427 "trsvcid": "4420" 00:18:26.427 }, 00:18:26.427 "peer_address": { 00:18:26.427 "trtype": "TCP", 00:18:26.427 "adrfam": "IPv4", 00:18:26.427 "traddr": "10.0.0.1", 00:18:26.427 "trsvcid": "43272" 00:18:26.427 }, 00:18:26.427 "auth": { 00:18:26.427 "state": "completed", 00:18:26.427 "digest": "sha256", 00:18:26.427 "dhgroup": "ffdhe6144" 00:18:26.427 } 00:18:26.427 } 00:18:26.427 ]' 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.427 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.688 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:26.688 21:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.628 21:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.887 00:18:27.887 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.887 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.887 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.147 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.147 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.147 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.147 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.147 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.147 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.147 { 00:18:28.147 "cntlid": 39, 00:18:28.147 "qid": 0, 00:18:28.147 "state": "enabled", 00:18:28.147 "thread": "nvmf_tgt_poll_group_000", 00:18:28.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:28.147 "listen_address": { 00:18:28.147 "trtype": "TCP", 00:18:28.147 "adrfam": 
"IPv4", 00:18:28.147 "traddr": "10.0.0.2", 00:18:28.147 "trsvcid": "4420" 00:18:28.147 }, 00:18:28.147 "peer_address": { 00:18:28.147 "trtype": "TCP", 00:18:28.147 "adrfam": "IPv4", 00:18:28.147 "traddr": "10.0.0.1", 00:18:28.147 "trsvcid": "43306" 00:18:28.147 }, 00:18:28.147 "auth": { 00:18:28.148 "state": "completed", 00:18:28.148 "digest": "sha256", 00:18:28.148 "dhgroup": "ffdhe6144" 00:18:28.148 } 00:18:28.148 } 00:18:28.148 ]' 00:18:28.148 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.148 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.148 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.148 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.148 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.407 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.407 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.407 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.407 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:28.407 21:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.345 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.603 
21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.603 21:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.169 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.169 21:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.169 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.169 { 00:18:30.169 "cntlid": 41, 00:18:30.169 "qid": 0, 00:18:30.169 "state": "enabled", 00:18:30.169 "thread": "nvmf_tgt_poll_group_000", 00:18:30.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:30.169 "listen_address": { 00:18:30.169 "trtype": "TCP", 00:18:30.169 "adrfam": "IPv4", 00:18:30.169 "traddr": "10.0.0.2", 00:18:30.169 "trsvcid": "4420" 00:18:30.169 }, 00:18:30.169 "peer_address": { 00:18:30.169 "trtype": "TCP", 00:18:30.169 "adrfam": "IPv4", 00:18:30.170 "traddr": "10.0.0.1", 00:18:30.170 "trsvcid": "43332" 00:18:30.170 }, 00:18:30.170 "auth": { 00:18:30.170 "state": "completed", 00:18:30.170 "digest": "sha256", 00:18:30.170 "dhgroup": "ffdhe8192" 00:18:30.170 } 00:18:30.170 } 00:18:30.170 ]' 00:18:30.170 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:30.427 21:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.363 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.624 21:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.195 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.195 
21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.195 { 00:18:32.195 "cntlid": 43, 00:18:32.195 "qid": 0, 00:18:32.195 "state": "enabled", 00:18:32.195 "thread": "nvmf_tgt_poll_group_000", 00:18:32.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:32.195 "listen_address": { 00:18:32.195 "trtype": "TCP", 00:18:32.195 "adrfam": "IPv4", 00:18:32.195 "traddr": "10.0.0.2", 00:18:32.195 "trsvcid": "4420" 00:18:32.195 }, 00:18:32.195 "peer_address": { 00:18:32.195 "trtype": "TCP", 00:18:32.195 "adrfam": "IPv4", 00:18:32.195 "traddr": "10.0.0.1", 00:18:32.195 "trsvcid": "42490" 00:18:32.195 }, 00:18:32.195 "auth": { 00:18:32.195 "state": "completed", 00:18:32.195 "digest": "sha256", 00:18:32.195 "dhgroup": "ffdhe8192" 00:18:32.195 } 00:18:32.195 } 00:18:32.195 ]' 00:18:32.195 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.196 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.196 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.456 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.456 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.456 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.456 21:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.456 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.456 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:32.456 21:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:33.397 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.398 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.398 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.398 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.398 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.398 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.398 21:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.398 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.658 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:33.658 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.658 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.658 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.658 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.659 21:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.919 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.179 { 00:18:34.179 "cntlid": 45, 00:18:34.179 "qid": 0, 00:18:34.179 "state": "enabled", 00:18:34.179 "thread": "nvmf_tgt_poll_group_000", 00:18:34.179 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:34.179 "listen_address": { 00:18:34.179 "trtype": "TCP", 00:18:34.179 "adrfam": "IPv4", 00:18:34.179 "traddr": "10.0.0.2", 00:18:34.179 "trsvcid": "4420" 00:18:34.179 }, 00:18:34.179 "peer_address": { 00:18:34.179 "trtype": "TCP", 00:18:34.179 "adrfam": "IPv4", 00:18:34.179 "traddr": "10.0.0.1", 00:18:34.179 "trsvcid": "42520" 00:18:34.179 }, 00:18:34.179 "auth": { 00:18:34.179 "state": "completed", 00:18:34.179 "digest": "sha256", 00:18:34.179 "dhgroup": "ffdhe8192" 00:18:34.179 } 00:18:34.179 } 00:18:34.179 ]' 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.179 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.439 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.439 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.439 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.439 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.439 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.439 21:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:34.439 21:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.379 21:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.949 00:18:35.949 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:35.949 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.949 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.209 { 00:18:36.209 "cntlid": 47, 00:18:36.209 "qid": 0, 00:18:36.209 "state": "enabled", 00:18:36.209 "thread": "nvmf_tgt_poll_group_000", 00:18:36.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:36.209 "listen_address": { 00:18:36.209 "trtype": "TCP", 00:18:36.209 "adrfam": "IPv4", 00:18:36.209 "traddr": "10.0.0.2", 00:18:36.209 "trsvcid": "4420" 00:18:36.209 }, 00:18:36.209 "peer_address": { 00:18:36.209 "trtype": "TCP", 00:18:36.209 "adrfam": "IPv4", 00:18:36.209 "traddr": "10.0.0.1", 00:18:36.209 "trsvcid": "42538" 00:18:36.209 }, 00:18:36.209 "auth": { 00:18:36.209 "state": "completed", 00:18:36.209 "digest": "sha256", 00:18:36.209 "dhgroup": "ffdhe8192" 00:18:36.209 } 00:18:36.209 } 00:18:36.209 ]' 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.209 21:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.209 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.469 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:36.469 21:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:37.410 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.411 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.671 00:18:37.671 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.671 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.671 21:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.932 21:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.932 { 00:18:37.932 "cntlid": 49, 00:18:37.932 "qid": 0, 00:18:37.932 "state": "enabled", 00:18:37.932 "thread": "nvmf_tgt_poll_group_000", 00:18:37.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:37.932 "listen_address": { 00:18:37.932 "trtype": "TCP", 00:18:37.932 "adrfam": "IPv4", 00:18:37.932 "traddr": "10.0.0.2", 00:18:37.932 "trsvcid": "4420" 00:18:37.932 }, 00:18:37.932 "peer_address": { 00:18:37.932 "trtype": "TCP", 00:18:37.932 "adrfam": "IPv4", 00:18:37.932 "traddr": "10.0.0.1", 00:18:37.932 "trsvcid": "42558" 00:18:37.932 }, 00:18:37.932 "auth": { 00:18:37.932 "state": "completed", 00:18:37.932 "digest": "sha384", 00:18:37.932 "dhgroup": "null" 00:18:37.932 } 00:18:37.932 } 00:18:37.932 ]' 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.932 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.192 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:38.192 21:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.762 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.022 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.282 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.282 { 00:18:39.282 "cntlid": 51, 
00:18:39.282 "qid": 0,
00:18:39.282 "state": "enabled",
00:18:39.282 "thread": "nvmf_tgt_poll_group_000",
00:18:39.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:39.282 "listen_address": {
00:18:39.282 "trtype": "TCP",
00:18:39.282 "adrfam": "IPv4",
00:18:39.282 "traddr": "10.0.0.2",
00:18:39.282 "trsvcid": "4420"
00:18:39.282 },
00:18:39.282 "peer_address": {
00:18:39.282 "trtype": "TCP",
00:18:39.282 "adrfam": "IPv4",
00:18:39.282 "traddr": "10.0.0.1",
00:18:39.282 "trsvcid": "42584"
00:18:39.282 },
00:18:39.282 "auth": {
00:18:39.282 "state": "completed",
00:18:39.282 "digest": "sha384",
00:18:39.282 "dhgroup": "null"
00:18:39.282 }
00:18:39.282 }
00:18:39.282 ]'
00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:39.282 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==:
00:18:39.543 21:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==:
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:40.484 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.746 21:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:40.746
00:18:40.746 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:40.746 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:40.746 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:41.010 {
00:18:41.010 "cntlid": 53,
00:18:41.010 "qid": 0,
00:18:41.010 "state": "enabled",
00:18:41.010 "thread": "nvmf_tgt_poll_group_000",
00:18:41.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:41.010 "listen_address": {
00:18:41.010 "trtype": "TCP",
00:18:41.010 "adrfam": "IPv4",
00:18:41.010 "traddr": "10.0.0.2",
00:18:41.010 "trsvcid": "4420"
00:18:41.010 },
00:18:41.010 "peer_address": {
00:18:41.010 "trtype": "TCP",
00:18:41.010 "adrfam": "IPv4",
00:18:41.010 "traddr": "10.0.0.1",
00:18:41.010 "trsvcid": "39606"
00:18:41.010 },
00:18:41.010 "auth": {
00:18:41.010 "state": "completed",
00:18:41.010 "digest": "sha384",
00:18:41.010 "dhgroup": "null"
00:18:41.010 }
00:18:41.010 }
00:18:41.010 ]'
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:41.010 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:41.270 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:41.270 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:41.270 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:41.270 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH:
00:18:41.270 21:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH:
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:42.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:42.213 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:42.474
00:18:42.474 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:42.474 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:42.474 21:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:42.734 {
00:18:42.734 "cntlid": 55,
00:18:42.734 "qid": 0,
00:18:42.734 "state": "enabled",
00:18:42.734 "thread": "nvmf_tgt_poll_group_000",
00:18:42.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:42.734 "listen_address": {
00:18:42.734 "trtype": "TCP",
00:18:42.734 "adrfam": "IPv4",
00:18:42.734 "traddr": "10.0.0.2",
00:18:42.734 "trsvcid": "4420"
00:18:42.734 },
00:18:42.734 "peer_address": {
00:18:42.734 "trtype": "TCP",
00:18:42.734 "adrfam": "IPv4",
00:18:42.734 "traddr": "10.0.0.1",
00:18:42.734 "trsvcid": "39638"
00:18:42.734 },
00:18:42.734 "auth": {
00:18:42.734 "state": "completed",
00:18:42.734 "digest": "sha384",
00:18:42.734 "dhgroup": "null"
00:18:42.734 }
00:18:42.734 }
00:18:42.734 ]'
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:42.734 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:42.994 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=:
00:18:42.994 21:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=:
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:43.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.936 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:44.195
00:18:44.196 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:44.196 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:44.196 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:44.455 {
00:18:44.455 "cntlid": 57,
00:18:44.455 "qid": 0,
00:18:44.455 "state": "enabled",
00:18:44.455 "thread": "nvmf_tgt_poll_group_000",
00:18:44.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:44.455 "listen_address": {
00:18:44.455 "trtype": "TCP",
00:18:44.455 "adrfam": "IPv4",
00:18:44.455 "traddr": "10.0.0.2",
00:18:44.455 "trsvcid": "4420"
00:18:44.455 },
00:18:44.455 "peer_address": {
00:18:44.455 "trtype": "TCP",
00:18:44.455 "adrfam": "IPv4",
00:18:44.455 "traddr": "10.0.0.1",
00:18:44.455 "trsvcid": "39678"
00:18:44.455 },
00:18:44.455 "auth": {
00:18:44.455 "state": "completed",
00:18:44.455 "digest": "sha384",
00:18:44.455 "dhgroup": "ffdhe2048"
00:18:44.455 }
00:18:44.455 }
00:18:44.455 ]'
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:44.455 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:44.714 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=:
00:18:44.714 21:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=:
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:45.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:45.652 21:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:45.911
00:18:45.911 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:45.911 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:45.911 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:46.170 {
00:18:46.170 "cntlid": 59,
00:18:46.170 "qid": 0,
00:18:46.170 "state": "enabled",
00:18:46.170 "thread": "nvmf_tgt_poll_group_000",
00:18:46.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:46.170 "listen_address": {
00:18:46.170 "trtype": "TCP",
00:18:46.170 "adrfam": "IPv4",
00:18:46.170 "traddr": "10.0.0.2",
00:18:46.170 "trsvcid": "4420"
00:18:46.170 },
00:18:46.170 "peer_address": {
00:18:46.170 "trtype": "TCP",
00:18:46.170 "adrfam": "IPv4",
00:18:46.170 "traddr": "10.0.0.1",
00:18:46.170 "trsvcid": "39696"
00:18:46.170 },
00:18:46.170 "auth": {
00:18:46.170 "state": "completed",
00:18:46.170 "digest": "sha384",
00:18:46.170 "dhgroup": "ffdhe2048"
00:18:46.170 }
00:18:46.170 }
00:18:46.170 ]'
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:46.170 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:46.430 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==:
00:18:46.430 21:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==:
00:18:46.997 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.997 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:46.997 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.997 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:47.257 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:47.516
00:18:47.516 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:47.516 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:47.516 21:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:47.776 {
00:18:47.776 "cntlid": 61,
00:18:47.776 "qid": 0,
00:18:47.776 "state": "enabled",
00:18:47.776 "thread": "nvmf_tgt_poll_group_000",
00:18:47.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396",
00:18:47.776 "listen_address": {
00:18:47.776 "trtype": "TCP",
00:18:47.776 "adrfam": "IPv4",
00:18:47.776 "traddr": "10.0.0.2",
00:18:47.776 "trsvcid": "4420"
00:18:47.776 },
00:18:47.776 "peer_address": {
00:18:47.776 "trtype": "TCP",
00:18:47.776 "adrfam": "IPv4",
00:18:47.776 "traddr": "10.0.0.1",
00:18:47.776 "trsvcid": "39724"
00:18:47.776 },
00:18:47.776 "auth": {
00:18:47.776 "state": "completed",
00:18:47.776 "digest": "sha384",
00:18:47.776 "dhgroup": "ffdhe2048"
00:18:47.776 }
00:18:47.776 }
00:18:47.776 ]'
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:47.776 21:11:49
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.776 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.035 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:48.035 21:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.976 
21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.976 21:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.976 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.238 00:18:49.238 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.238 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.238 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.498 { 00:18:49.498 "cntlid": 63, 00:18:49.498 
"qid": 0, 00:18:49.498 "state": "enabled", 00:18:49.498 "thread": "nvmf_tgt_poll_group_000", 00:18:49.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:49.498 "listen_address": { 00:18:49.498 "trtype": "TCP", 00:18:49.498 "adrfam": "IPv4", 00:18:49.498 "traddr": "10.0.0.2", 00:18:49.498 "trsvcid": "4420" 00:18:49.498 }, 00:18:49.498 "peer_address": { 00:18:49.498 "trtype": "TCP", 00:18:49.498 "adrfam": "IPv4", 00:18:49.498 "traddr": "10.0.0.1", 00:18:49.498 "trsvcid": "39762" 00:18:49.498 }, 00:18:49.498 "auth": { 00:18:49.498 "state": "completed", 00:18:49.498 "digest": "sha384", 00:18:49.498 "dhgroup": "ffdhe2048" 00:18:49.498 } 00:18:49.498 } 00:18:49.498 ]' 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.498 21:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.758 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:49.758 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.699 21:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.699 21:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.699 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.960 00:18:50.960 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.960 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.960 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.220 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.220 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.220 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.220 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.221 { 00:18:51.221 "cntlid": 65, 00:18:51.221 "qid": 0, 00:18:51.221 "state": "enabled", 00:18:51.221 "thread": "nvmf_tgt_poll_group_000", 00:18:51.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:51.221 "listen_address": { 00:18:51.221 "trtype": "TCP", 00:18:51.221 "adrfam": "IPv4", 00:18:51.221 "traddr": "10.0.0.2", 00:18:51.221 "trsvcid": "4420" 00:18:51.221 }, 00:18:51.221 "peer_address": { 00:18:51.221 "trtype": "TCP", 00:18:51.221 "adrfam": "IPv4", 00:18:51.221 "traddr": "10.0.0.1", 00:18:51.221 "trsvcid": "45590" 00:18:51.221 }, 00:18:51.221 "auth": { 00:18:51.221 "state": 
"completed", 00:18:51.221 "digest": "sha384", 00:18:51.221 "dhgroup": "ffdhe3072" 00:18:51.221 } 00:18:51.221 } 00:18:51.221 ]' 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.221 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.480 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:51.480 21:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.421 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.682 00:18:52.682 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.682 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.682 21:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.942 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.942 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.942 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.943 { 00:18:52.943 "cntlid": 67, 00:18:52.943 "qid": 0, 00:18:52.943 "state": "enabled", 00:18:52.943 "thread": "nvmf_tgt_poll_group_000", 00:18:52.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.943 "listen_address": { 00:18:52.943 "trtype": "TCP", 00:18:52.943 "adrfam": "IPv4", 00:18:52.943 "traddr": "10.0.0.2", 00:18:52.943 "trsvcid": "4420" 00:18:52.943 }, 00:18:52.943 "peer_address": { 00:18:52.943 "trtype": "TCP", 00:18:52.943 "adrfam": "IPv4", 00:18:52.943 "traddr": "10.0.0.1", 00:18:52.943 "trsvcid": "45614" 00:18:52.943 }, 00:18:52.943 "auth": { 00:18:52.943 "state": "completed", 00:18:52.943 "digest": "sha384", 00:18:52.943 "dhgroup": "ffdhe3072" 00:18:52.943 } 00:18:52.943 } 00:18:52.943 ]' 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.943 21:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.943 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.203 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:53.203 21:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:53.773 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.033 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.034 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.293 00:18:54.293 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.293 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.293 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.553 21:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.553 { 00:18:54.553 "cntlid": 69, 00:18:54.553 "qid": 0, 00:18:54.553 "state": "enabled", 00:18:54.553 "thread": "nvmf_tgt_poll_group_000", 00:18:54.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:54.553 "listen_address": { 00:18:54.553 "trtype": "TCP", 00:18:54.553 "adrfam": "IPv4", 00:18:54.553 "traddr": "10.0.0.2", 00:18:54.553 "trsvcid": "4420" 00:18:54.553 }, 00:18:54.553 "peer_address": { 00:18:54.553 "trtype": "TCP", 00:18:54.553 "adrfam": "IPv4", 00:18:54.553 "traddr": "10.0.0.1", 00:18:54.553 "trsvcid": "45658" 00:18:54.553 }, 00:18:54.553 "auth": { 00:18:54.553 "state": "completed", 00:18:54.553 "digest": "sha384", 00:18:54.553 "dhgroup": "ffdhe3072" 00:18:54.553 } 00:18:54.553 } 00:18:54.553 ]' 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.553 21:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.828 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:54.828 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:55.770 21:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.770 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.030 00:18:56.030 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.030 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.030 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.290 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.290 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.290 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.290 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.290 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.290 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.290 { 00:18:56.290 "cntlid": 71, 00:18:56.290 "qid": 0, 00:18:56.290 "state": "enabled", 00:18:56.290 "thread": "nvmf_tgt_poll_group_000", 00:18:56.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:56.290 "listen_address": { 00:18:56.290 "trtype": "TCP", 00:18:56.290 "adrfam": "IPv4", 00:18:56.290 "traddr": "10.0.0.2", 00:18:56.290 "trsvcid": "4420" 00:18:56.290 }, 00:18:56.290 "peer_address": { 00:18:56.291 "trtype": "TCP", 00:18:56.291 "adrfam": "IPv4", 00:18:56.291 "traddr": "10.0.0.1", 
00:18:56.291 "trsvcid": "45682" 00:18:56.291 }, 00:18:56.291 "auth": { 00:18:56.291 "state": "completed", 00:18:56.291 "digest": "sha384", 00:18:56.291 "dhgroup": "ffdhe3072" 00:18:56.291 } 00:18:56.291 } 00:18:56.291 ]' 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.291 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.549 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:56.549 21:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.514 21:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.514 21:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.836 00:18:57.836 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.836 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.836 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.115 { 00:18:58.115 "cntlid": 73, 00:18:58.115 "qid": 0, 00:18:58.115 "state": "enabled", 00:18:58.115 "thread": "nvmf_tgt_poll_group_000", 00:18:58.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:58.115 "listen_address": { 00:18:58.115 "trtype": "TCP", 00:18:58.115 "adrfam": "IPv4", 00:18:58.115 "traddr": "10.0.0.2", 00:18:58.115 "trsvcid": "4420" 00:18:58.115 }, 00:18:58.115 "peer_address": { 00:18:58.115 "trtype": "TCP", 00:18:58.115 "adrfam": "IPv4", 00:18:58.115 "traddr": "10.0.0.1", 00:18:58.115 "trsvcid": "45706" 00:18:58.115 }, 00:18:58.115 "auth": { 00:18:58.115 "state": "completed", 00:18:58.115 "digest": "sha384", 00:18:58.115 "dhgroup": "ffdhe4096" 00:18:58.115 } 00:18:58.115 } 00:18:58.115 ]' 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.115 21:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.115 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.387 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:58.387 21:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.958 21:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:58.958 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.218 21:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.218 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.480 00:18:59.480 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.480 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.480 21:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.741 { 00:18:59.741 "cntlid": 75, 00:18:59.741 "qid": 0, 00:18:59.741 "state": "enabled", 00:18:59.741 "thread": "nvmf_tgt_poll_group_000", 00:18:59.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:59.741 "listen_address": { 00:18:59.741 "trtype": "TCP", 00:18:59.741 "adrfam": "IPv4", 00:18:59.741 "traddr": "10.0.0.2", 00:18:59.741 "trsvcid": "4420" 00:18:59.741 }, 00:18:59.741 "peer_address": { 00:18:59.741 "trtype": "TCP", 00:18:59.741 "adrfam": "IPv4", 00:18:59.741 "traddr": "10.0.0.1", 00:18:59.741 "trsvcid": "45720" 00:18:59.741 }, 00:18:59.741 "auth": { 00:18:59.741 "state": "completed", 00:18:59.741 "digest": "sha384", 00:18:59.741 "dhgroup": "ffdhe4096" 00:18:59.741 } 00:18:59.741 } 00:18:59.741 ]' 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.741 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.000 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:00.000 21:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:00.939 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.940 21:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.940 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.200 00:19:01.200 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.200 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.200 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.461 { 00:19:01.461 "cntlid": 77, 00:19:01.461 "qid": 0, 00:19:01.461 "state": "enabled", 00:19:01.461 "thread": "nvmf_tgt_poll_group_000", 00:19:01.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:01.461 "listen_address": { 00:19:01.461 "trtype": "TCP", 00:19:01.461 "adrfam": "IPv4", 00:19:01.461 "traddr": "10.0.0.2", 00:19:01.461 
"trsvcid": "4420" 00:19:01.461 }, 00:19:01.461 "peer_address": { 00:19:01.461 "trtype": "TCP", 00:19:01.461 "adrfam": "IPv4", 00:19:01.461 "traddr": "10.0.0.1", 00:19:01.461 "trsvcid": "44780" 00:19:01.461 }, 00:19:01.461 "auth": { 00:19:01.461 "state": "completed", 00:19:01.461 "digest": "sha384", 00:19:01.461 "dhgroup": "ffdhe4096" 00:19:01.461 } 00:19:01.461 } 00:19:01.461 ]' 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.461 21:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.722 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:01.722 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.662 21:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.922 00:19:02.923 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.923 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.923 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.183 { 00:19:03.183 "cntlid": 79, 00:19:03.183 "qid": 0, 00:19:03.183 "state": "enabled", 00:19:03.183 "thread": "nvmf_tgt_poll_group_000", 00:19:03.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:03.183 "listen_address": { 00:19:03.183 "trtype": "TCP", 00:19:03.183 "adrfam": "IPv4", 00:19:03.183 "traddr": "10.0.0.2", 00:19:03.183 "trsvcid": "4420" 00:19:03.183 }, 00:19:03.183 "peer_address": { 00:19:03.183 "trtype": "TCP", 00:19:03.183 "adrfam": "IPv4", 00:19:03.183 "traddr": "10.0.0.1", 00:19:03.183 "trsvcid": "44812" 00:19:03.183 }, 00:19:03.183 "auth": { 00:19:03.183 "state": "completed", 00:19:03.183 "digest": "sha384", 00:19:03.183 "dhgroup": "ffdhe4096" 00:19:03.183 } 00:19:03.183 } 00:19:03.183 ]' 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.183 21:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.183 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.442 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:03.442 21:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.379 21:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.639 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.900 21:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.900 { 00:19:04.900 "cntlid": 81, 00:19:04.900 "qid": 0, 00:19:04.900 "state": "enabled", 00:19:04.900 "thread": "nvmf_tgt_poll_group_000", 00:19:04.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:04.900 "listen_address": { 00:19:04.900 "trtype": "TCP", 00:19:04.900 "adrfam": "IPv4", 00:19:04.900 "traddr": "10.0.0.2", 00:19:04.900 "trsvcid": "4420" 00:19:04.900 }, 00:19:04.900 "peer_address": { 00:19:04.900 "trtype": "TCP", 00:19:04.900 "adrfam": "IPv4", 00:19:04.900 "traddr": "10.0.0.1", 00:19:04.900 "trsvcid": "44840" 00:19:04.900 }, 00:19:04.900 "auth": { 00:19:04.900 "state": "completed", 00:19:04.900 "digest": "sha384", 00:19:04.900 "dhgroup": "ffdhe6144" 00:19:04.900 } 00:19:04.900 } 00:19:04.900 ]' 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.900 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.159 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.159 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.159 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.159 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.159 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.160 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:05.160 21:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.097 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:06.097 21:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.356 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.616 00:19:06.616 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.616 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.616 21:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.877 { 00:19:06.877 "cntlid": 83, 00:19:06.877 "qid": 0, 00:19:06.877 "state": "enabled", 00:19:06.877 "thread": "nvmf_tgt_poll_group_000", 00:19:06.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:06.877 "listen_address": { 00:19:06.877 "trtype": "TCP", 00:19:06.877 "adrfam": "IPv4", 00:19:06.877 "traddr": "10.0.0.2", 00:19:06.877 
"trsvcid": "4420" 00:19:06.877 }, 00:19:06.877 "peer_address": { 00:19:06.877 "trtype": "TCP", 00:19:06.877 "adrfam": "IPv4", 00:19:06.877 "traddr": "10.0.0.1", 00:19:06.877 "trsvcid": "44876" 00:19:06.877 }, 00:19:06.877 "auth": { 00:19:06.877 "state": "completed", 00:19:06.877 "digest": "sha384", 00:19:06.877 "dhgroup": "ffdhe6144" 00:19:06.877 } 00:19:06.877 } 00:19:06.877 ]' 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.877 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.138 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:07.138 21:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.077 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.078 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.338 00:19:08.338 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.338 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:08.338 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.600 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.600 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.600 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.600 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.601 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.601 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.601 { 00:19:08.601 "cntlid": 85, 00:19:08.601 "qid": 0, 00:19:08.601 "state": "enabled", 00:19:08.601 "thread": "nvmf_tgt_poll_group_000", 00:19:08.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:08.601 "listen_address": { 00:19:08.601 "trtype": "TCP", 00:19:08.601 "adrfam": "IPv4", 00:19:08.601 "traddr": "10.0.0.2", 00:19:08.601 "trsvcid": "4420" 00:19:08.601 }, 00:19:08.601 "peer_address": { 00:19:08.601 "trtype": "TCP", 00:19:08.601 "adrfam": "IPv4", 00:19:08.601 "traddr": "10.0.0.1", 00:19:08.601 "trsvcid": "44906" 00:19:08.601 }, 00:19:08.601 "auth": { 00:19:08.601 "state": "completed", 00:19:08.601 "digest": "sha384", 00:19:08.601 "dhgroup": "ffdhe6144" 00:19:08.601 } 00:19:08.601 } 00:19:08.601 ]' 00:19:08.601 21:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.601 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.601 21:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:08.862 21:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.803 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.374 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.374 { 00:19:10.374 "cntlid": 87, 00:19:10.374 "qid": 0, 00:19:10.374 "state": "enabled", 00:19:10.374 "thread": "nvmf_tgt_poll_group_000", 00:19:10.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:10.374 "listen_address": { 00:19:10.374 "trtype": "TCP", 00:19:10.374 "adrfam": "IPv4", 00:19:10.374 "traddr": "10.0.0.2", 00:19:10.374 "trsvcid": "4420" 00:19:10.374 }, 00:19:10.374 "peer_address": { 00:19:10.374 "trtype": "TCP", 00:19:10.374 "adrfam": "IPv4", 00:19:10.374 "traddr": "10.0.0.1", 00:19:10.374 "trsvcid": "44938" 00:19:10.374 }, 00:19:10.374 "auth": { 00:19:10.374 "state": "completed", 00:19:10.374 "digest": "sha384", 00:19:10.374 "dhgroup": "ffdhe6144" 00:19:10.374 } 00:19:10.374 } 00:19:10.374 ]' 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.374 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.634 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.634 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.634 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.634 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.634 21:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.894 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:10.894 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:11.465 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.466 21:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.466 21:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.725 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.726 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.726 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.726 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.726 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.726 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.295 00:19:12.295 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.295 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.295 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.556 { 00:19:12.556 "cntlid": 89, 00:19:12.556 "qid": 0, 00:19:12.556 "state": "enabled", 00:19:12.556 "thread": "nvmf_tgt_poll_group_000", 00:19:12.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:12.556 "listen_address": { 00:19:12.556 "trtype": "TCP", 00:19:12.556 "adrfam": "IPv4", 00:19:12.556 "traddr": "10.0.0.2", 00:19:12.556 
"trsvcid": "4420" 00:19:12.556 }, 00:19:12.556 "peer_address": { 00:19:12.556 "trtype": "TCP", 00:19:12.556 "adrfam": "IPv4", 00:19:12.556 "traddr": "10.0.0.1", 00:19:12.556 "trsvcid": "40264" 00:19:12.556 }, 00:19:12.556 "auth": { 00:19:12.556 "state": "completed", 00:19:12.556 "digest": "sha384", 00:19:12.556 "dhgroup": "ffdhe8192" 00:19:12.556 } 00:19:12.556 } 00:19:12.556 ]' 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.556 21:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.817 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:12.817 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:13.388 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.649 21:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.649 21:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.649 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.649 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.649 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.649 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.220 00:19:14.220 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.220 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.220 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.480 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.481 { 00:19:14.481 "cntlid": 91, 00:19:14.481 "qid": 0, 00:19:14.481 "state": "enabled", 00:19:14.481 "thread": "nvmf_tgt_poll_group_000", 00:19:14.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:14.481 "listen_address": { 00:19:14.481 "trtype": "TCP", 00:19:14.481 "adrfam": "IPv4", 00:19:14.481 "traddr": "10.0.0.2", 00:19:14.481 "trsvcid": "4420" 00:19:14.481 }, 00:19:14.481 "peer_address": { 00:19:14.481 "trtype": "TCP", 00:19:14.481 "adrfam": "IPv4", 00:19:14.481 "traddr": "10.0.0.1", 00:19:14.481 "trsvcid": "40294" 00:19:14.481 }, 00:19:14.481 "auth": { 00:19:14.481 "state": "completed", 00:19:14.481 "digest": "sha384", 00:19:14.481 "dhgroup": "ffdhe8192" 00:19:14.481 } 00:19:14.481 } 00:19:14.481 ]' 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.481 21:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.481 21:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.741 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:14.741 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.683 21:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.254 00:19:16.254 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.254 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.254 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.254 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.514 21:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.514 { 00:19:16.514 "cntlid": 93, 00:19:16.514 "qid": 0, 00:19:16.514 "state": "enabled", 00:19:16.514 "thread": "nvmf_tgt_poll_group_000", 00:19:16.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:16.514 "listen_address": { 00:19:16.514 "trtype": "TCP", 00:19:16.514 "adrfam": "IPv4", 00:19:16.514 "traddr": "10.0.0.2", 00:19:16.514 "trsvcid": "4420" 00:19:16.514 }, 00:19:16.514 "peer_address": { 00:19:16.514 "trtype": "TCP", 00:19:16.514 "adrfam": "IPv4", 00:19:16.514 "traddr": "10.0.0.1", 00:19:16.514 "trsvcid": "40320" 00:19:16.514 }, 00:19:16.514 "auth": { 00:19:16.514 "state": "completed", 00:19:16.514 "digest": "sha384", 00:19:16.514 "dhgroup": "ffdhe8192" 00:19:16.514 } 00:19:16.514 } 00:19:16.514 ]' 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.514 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.515 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.515 21:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.776 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:16.777 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:17.349 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:17.610 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.611 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.611 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.611 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.611 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.611 21:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.182 00:19:18.182 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.182 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.182 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.443 { 00:19:18.443 "cntlid": 95, 00:19:18.443 "qid": 0, 00:19:18.443 "state": "enabled", 00:19:18.443 "thread": "nvmf_tgt_poll_group_000", 00:19:18.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:18.443 "listen_address": { 00:19:18.443 "trtype": "TCP", 00:19:18.443 "adrfam": 
"IPv4", 00:19:18.443 "traddr": "10.0.0.2", 00:19:18.443 "trsvcid": "4420" 00:19:18.443 }, 00:19:18.443 "peer_address": { 00:19:18.443 "trtype": "TCP", 00:19:18.443 "adrfam": "IPv4", 00:19:18.443 "traddr": "10.0.0.1", 00:19:18.443 "trsvcid": "40356" 00:19:18.443 }, 00:19:18.443 "auth": { 00:19:18.443 "state": "completed", 00:19:18.443 "digest": "sha384", 00:19:18.443 "dhgroup": "ffdhe8192" 00:19:18.443 } 00:19:18.443 } 00:19:18.443 ]' 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.443 21:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.703 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:18.703 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.644 
21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:19.644 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.645 21:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.907 00:19:19.907 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.907 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.907 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.167 { 00:19:20.167 "cntlid": 97, 00:19:20.167 "qid": 0, 00:19:20.167 "state": "enabled", 00:19:20.167 "thread": "nvmf_tgt_poll_group_000", 00:19:20.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:20.167 "listen_address": { 00:19:20.167 "trtype": "TCP", 00:19:20.167 "adrfam": "IPv4", 00:19:20.167 "traddr": "10.0.0.2", 00:19:20.167 "trsvcid": "4420" 00:19:20.167 }, 00:19:20.167 "peer_address": { 00:19:20.167 "trtype": "TCP", 00:19:20.167 "adrfam": "IPv4", 00:19:20.167 "traddr": "10.0.0.1", 00:19:20.167 "trsvcid": "40376" 00:19:20.167 }, 00:19:20.167 "auth": { 00:19:20.167 "state": "completed", 00:19:20.167 "digest": "sha512", 00:19:20.167 "dhgroup": "null" 00:19:20.167 } 00:19:20.167 } 00:19:20.167 ]' 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.167 21:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.167 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.168 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.428 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:20.428 21:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:21.004 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.004 21:12:22 
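The `--dhchap-secret`/`--dhchap-ctrl-secret` strings passed to `nvme connect` above follow the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hash id>:<base64 payload>:`, where the payload is the raw key material followed by a 4-byte CRC-32. A minimal parser for that layout, checked against the key0 secret from this trace (the CRC convention — standard CRC-32 of the key, appended little-endian — is an assumption based on how nvme-cli and SPDK generate these secrets, not something this log confirms):

```python
import base64
import zlib

def parse_dhchap_secret(secret: str) -> dict:
    # Expected shape: "DHHC-1:<hash id>:<base64(key || crc32)>:"
    version, hash_id, b64, _trailer = secret.split(":")
    data = base64.b64decode(b64)
    key, crc_bytes = data[:-4], data[-4:]
    # Assumption: nvme-cli/SPDK append the CRC-32 of the key, little-endian
    crc_ok = (zlib.crc32(key) & 0xFFFFFFFF) == int.from_bytes(crc_bytes, "little")
    return {"version": version, "hash_id": hash_id,
            "key_len": len(key), "crc_ok": crc_ok}

# key0 host secret copied verbatim from the trace above
info = parse_dhchap_secret(
    "DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==:"
)
```

For this secret the payload decodes to 52 bytes, i.e. a 48-byte key plus the 4-byte trailer, consistent with the layout above.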
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.004 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.004 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.265 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.526 00:19:21.526 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.526 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.526 21:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.787 { 00:19:21.787 "cntlid": 99, 00:19:21.787 "qid": 0, 00:19:21.787 "state": "enabled", 00:19:21.787 "thread": "nvmf_tgt_poll_group_000", 00:19:21.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:21.787 "listen_address": { 00:19:21.787 "trtype": "TCP", 00:19:21.787 "adrfam": "IPv4", 00:19:21.787 "traddr": "10.0.0.2", 00:19:21.787 "trsvcid": "4420" 00:19:21.787 }, 00:19:21.787 "peer_address": { 00:19:21.787 "trtype": "TCP", 00:19:21.787 "adrfam": "IPv4", 00:19:21.787 "traddr": "10.0.0.1", 00:19:21.787 "trsvcid": "35640" 00:19:21.787 }, 00:19:21.787 "auth": { 00:19:21.787 "state": "completed", 00:19:21.787 "digest": "sha512", 00:19:21.787 "dhgroup": "null" 00:19:21.787 } 00:19:21.787 } 00:19:21.787 ]' 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.787 
21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.787 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.060 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:22.060 21:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.002 
21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.002 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.003 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.261 00:19:23.261 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.261 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.261 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.521 { 00:19:23.521 "cntlid": 101, 00:19:23.521 "qid": 0, 00:19:23.521 "state": "enabled", 00:19:23.521 "thread": "nvmf_tgt_poll_group_000", 00:19:23.521 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:23.521 "listen_address": { 00:19:23.521 "trtype": "TCP", 00:19:23.521 "adrfam": "IPv4", 00:19:23.521 "traddr": "10.0.0.2", 00:19:23.521 "trsvcid": "4420" 00:19:23.521 }, 00:19:23.521 "peer_address": { 00:19:23.521 "trtype": "TCP", 00:19:23.521 "adrfam": "IPv4", 00:19:23.521 "traddr": "10.0.0.1", 00:19:23.521 "trsvcid": "35668" 00:19:23.521 }, 00:19:23.521 "auth": { 00:19:23.521 "state": "completed", 00:19:23.521 "digest": "sha512", 00:19:23.521 "dhgroup": "null" 00:19:23.521 } 00:19:23.521 } 00:19:23.521 ]' 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.521 21:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.780 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:23.780 21:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.348 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.608 21:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.867 00:19:24.867 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.867 
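The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion traced at `auth.sh@68` adds the controller-key flag only when a ckey exists for that key index — which is why the key3 `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` calls above carry `--dhchap-key key3` but no `--dhchap-ctrlr-key`. A hedged Python sketch of the same argument-building logic (function and variable names are illustrative, not SPDK API):

```python
def build_dhchap_args(keyid: int, ckeys: dict) -> list:
    # Mirror of auth.sh@68: the controller-key flag pair is appended
    # only if a non-empty ckey is configured for this key index,
    # matching bash's ${var:+...} expansion (empty/unset -> nothing).
    args = ["--dhchap-key", f"key{keyid}"]
    if ckeys.get(keyid):
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# In this run keys 0-2 have paired controller keys; key3 does not
ckeys = {0: "ckey0", 1: "ckey1", 2: "ckey2", 3: ""}
```

With this table, `build_dhchap_args(3, ckeys)` yields only the `--dhchap-key key3` pair, matching the key3 RPC invocations in the log.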
21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.867 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.867 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.867 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.867 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.867 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.127 { 00:19:25.127 "cntlid": 103, 00:19:25.127 "qid": 0, 00:19:25.127 "state": "enabled", 00:19:25.127 "thread": "nvmf_tgt_poll_group_000", 00:19:25.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:25.127 "listen_address": { 00:19:25.127 "trtype": "TCP", 00:19:25.127 "adrfam": "IPv4", 00:19:25.127 "traddr": "10.0.0.2", 00:19:25.127 "trsvcid": "4420" 00:19:25.127 }, 00:19:25.127 "peer_address": { 00:19:25.127 "trtype": "TCP", 00:19:25.127 "adrfam": "IPv4", 00:19:25.127 "traddr": "10.0.0.1", 00:19:25.127 "trsvcid": "35698" 00:19:25.127 }, 00:19:25.127 "auth": { 00:19:25.127 "state": "completed", 00:19:25.127 "digest": "sha512", 00:19:25.127 "dhgroup": "null" 00:19:25.127 } 00:19:25.127 } 00:19:25.127 ]' 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.127 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.387 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:25.387 21:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.957 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.218 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.478 00:19:26.478 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.478 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.478 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.738 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.739 { 00:19:26.739 "cntlid": 105, 00:19:26.739 "qid": 0, 00:19:26.739 "state": "enabled", 00:19:26.739 "thread": "nvmf_tgt_poll_group_000", 00:19:26.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:26.739 "listen_address": { 00:19:26.739 "trtype": "TCP", 00:19:26.739 "adrfam": "IPv4", 00:19:26.739 "traddr": "10.0.0.2", 00:19:26.739 "trsvcid": "4420" 00:19:26.739 }, 00:19:26.739 "peer_address": { 00:19:26.739 "trtype": "TCP", 00:19:26.739 "adrfam": "IPv4", 00:19:26.739 "traddr": "10.0.0.1", 00:19:26.739 "trsvcid": "35732" 00:19:26.739 }, 00:19:26.739 "auth": { 00:19:26.739 "state": "completed", 00:19:26.739 "digest": "sha512", 00:19:26.739 "dhgroup": "ffdhe2048" 00:19:26.739 } 00:19:26.739 } 00:19:26.739 ]' 00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.739 21:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.739 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.739 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.739 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.739 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.739 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.739 21:12:28 
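The jq probes at `auth.sh@75`-`@77` validate `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` from the `nvmf_subsystem_get_qpairs` output against the configured values. A small Python equivalent of those checks, using a copy of the ffdhe2048 qpair record from this trace trimmed to the fields the script inspects:

```python
import json

# Trimmed copy of the nvmf_subsystem_get_qpairs output shown above
qpairs_json = """
[
  {
    "cntlid": 105,
    "qid": 0,
    "state": "enabled",
    "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe2048" }
  }
]
"""

def check_auth(qpairs: str, digest: str, dhgroup: str) -> bool:
    # Same three checks as auth.sh@75-77: digest and dhgroup match the
    # values configured via bdev_nvme_set_options, and auth completed.
    auth = json.loads(qpairs)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")
```

For the qpair above, `check_auth(qpairs_json, "sha512", "ffdhe2048")` holds, just as the corresponding `[[ ... ]]` comparisons in the trace succeed.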
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.999 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:26.999 21:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:27.943 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.944 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.209 00:19:28.209 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.209 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.209 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.472 { 00:19:28.472 "cntlid": 107, 00:19:28.472 "qid": 0, 00:19:28.472 "state": "enabled", 00:19:28.472 "thread": "nvmf_tgt_poll_group_000", 00:19:28.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:28.472 
"listen_address": { 00:19:28.472 "trtype": "TCP", 00:19:28.472 "adrfam": "IPv4", 00:19:28.472 "traddr": "10.0.0.2", 00:19:28.472 "trsvcid": "4420" 00:19:28.472 }, 00:19:28.472 "peer_address": { 00:19:28.472 "trtype": "TCP", 00:19:28.472 "adrfam": "IPv4", 00:19:28.472 "traddr": "10.0.0.1", 00:19:28.472 "trsvcid": "35770" 00:19:28.472 }, 00:19:28.472 "auth": { 00:19:28.472 "state": "completed", 00:19:28.472 "digest": "sha512", 00:19:28.472 "dhgroup": "ffdhe2048" 00:19:28.472 } 00:19:28.472 } 00:19:28.472 ]' 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.472 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.731 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:28.732 21:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:29.671 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.672 21:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.933 00:19:29.933 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:29.933 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.933 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.193 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.193 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.193 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.193 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.193 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.193 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.193 { 00:19:30.193 "cntlid": 109, 00:19:30.193 "qid": 0, 00:19:30.193 "state": "enabled", 00:19:30.193 "thread": "nvmf_tgt_poll_group_000", 00:19:30.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:30.194 "listen_address": { 00:19:30.194 "trtype": "TCP", 00:19:30.194 "adrfam": "IPv4", 00:19:30.194 "traddr": "10.0.0.2", 00:19:30.194 "trsvcid": "4420" 00:19:30.194 }, 00:19:30.194 "peer_address": { 00:19:30.194 "trtype": "TCP", 00:19:30.194 "adrfam": "IPv4", 00:19:30.194 "traddr": "10.0.0.1", 00:19:30.194 "trsvcid": "35792" 00:19:30.194 }, 00:19:30.194 "auth": { 00:19:30.194 "state": "completed", 00:19:30.194 "digest": "sha512", 00:19:30.194 "dhgroup": "ffdhe2048" 00:19:30.194 } 00:19:30.194 } 00:19:30.194 ]' 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.194 21:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.194 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.454 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:30.454 21:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:31.394 21:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.394 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.654 00:19:31.654 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.654 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.654 21:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.914 21:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.914 { 00:19:31.914 "cntlid": 111, 00:19:31.914 "qid": 0, 00:19:31.914 "state": "enabled", 00:19:31.914 "thread": "nvmf_tgt_poll_group_000", 00:19:31.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:31.914 "listen_address": { 00:19:31.914 "trtype": "TCP", 00:19:31.914 "adrfam": "IPv4", 00:19:31.914 "traddr": "10.0.0.2", 00:19:31.914 "trsvcid": "4420" 00:19:31.914 }, 00:19:31.914 "peer_address": { 00:19:31.914 "trtype": "TCP", 00:19:31.914 "adrfam": "IPv4", 00:19:31.914 "traddr": "10.0.0.1", 00:19:31.914 "trsvcid": "35822" 00:19:31.914 }, 00:19:31.914 "auth": { 00:19:31.914 "state": "completed", 00:19:31.914 "digest": "sha512", 00:19:31.914 "dhgroup": "ffdhe2048" 00:19:31.914 } 00:19:31.914 } 00:19:31.914 ]' 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.914 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.914 21:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.174 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:32.174 21:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.115 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.375 00:19:33.375 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.375 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.375 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.635 { 00:19:33.635 "cntlid": 113, 00:19:33.635 "qid": 0, 00:19:33.635 "state": "enabled", 00:19:33.635 "thread": "nvmf_tgt_poll_group_000", 00:19:33.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:33.635 "listen_address": { 
00:19:33.635 "trtype": "TCP", 00:19:33.635 "adrfam": "IPv4", 00:19:33.635 "traddr": "10.0.0.2", 00:19:33.635 "trsvcid": "4420" 00:19:33.635 }, 00:19:33.635 "peer_address": { 00:19:33.635 "trtype": "TCP", 00:19:33.635 "adrfam": "IPv4", 00:19:33.635 "traddr": "10.0.0.1", 00:19:33.635 "trsvcid": "35850" 00:19:33.635 }, 00:19:33.635 "auth": { 00:19:33.635 "state": "completed", 00:19:33.635 "digest": "sha512", 00:19:33.635 "dhgroup": "ffdhe3072" 00:19:33.635 } 00:19:33.635 } 00:19:33.635 ]' 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.635 21:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.894 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:33.894 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.833 21:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
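The trace above repeats one pattern per key: restrict the host to a single digest/dhgroup pair, register the host NQN on the subsystem with that key, attach a controller (forcing the DH-HMAC-CHAP handshake), verify the negotiated auth state, then detach and remove the host before the next iteration. A dry-run sketch of that cycle, with `RPC` as a stand-in for `spdk/scripts/rpc.py -s /var/tmp/host.sock` and a hypothetical host NQN (both placeholders, not the test's actual wiring):

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate cycle seen in the trace.
# RPC echoes commands instead of invoking the real rpc.py.
RPC="echo rpc.py"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:example"  # hypothetical host NQN

auth_cycle() {
  local digest=$1 dhgroup=$2 keyid=$3
  # 1. Restrict host-side DH-HMAC-CHAP negotiation to one digest/dhgroup pair
  $RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # 2. Register the host on the subsystem with the key under test
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key "key$keyid"
  # 3. Attach a controller, which triggers the authentication handshake
  $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$keyid"
  # 4. Tear down so the next digest/dhgroup/key combination can be tested
  $RPC bdev_nvme_detach_controller nvme0
  $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
}

auth_cycle sha512 ffdhe2048 1
```

In the real test the verification step between attach and detach checks `nvmf_subsystem_get_qpairs` for `"state": "completed"` with the expected digest and dhgroup, exactly as the jq checks in the trace do.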
00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.833 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.092 00:19:35.092 21:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.092 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.092 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.352 { 00:19:35.352 "cntlid": 115, 00:19:35.352 "qid": 0, 00:19:35.352 "state": "enabled", 00:19:35.352 "thread": "nvmf_tgt_poll_group_000", 00:19:35.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:35.352 "listen_address": { 00:19:35.352 "trtype": "TCP", 00:19:35.352 "adrfam": "IPv4", 00:19:35.352 "traddr": "10.0.0.2", 00:19:35.352 "trsvcid": "4420" 00:19:35.352 }, 00:19:35.352 "peer_address": { 00:19:35.352 "trtype": "TCP", 00:19:35.352 "adrfam": "IPv4", 00:19:35.352 "traddr": "10.0.0.1", 00:19:35.352 "trsvcid": "35894" 00:19:35.352 }, 00:19:35.352 "auth": { 00:19:35.352 "state": "completed", 00:19:35.352 "digest": "sha512", 00:19:35.352 "dhgroup": "ffdhe3072" 00:19:35.352 } 00:19:35.352 } 00:19:35.352 ]' 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.352 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.611 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:35.611 21:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:36.181 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.440 21:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.440 21:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.699 00:19:36.699 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.699 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.699 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.966 { 00:19:36.966 "cntlid": 117, 00:19:36.966 "qid": 0, 00:19:36.966 "state": "enabled", 00:19:36.966 "thread": "nvmf_tgt_poll_group_000", 00:19:36.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:36.966 "listen_address": { 00:19:36.966 "trtype": "TCP", 00:19:36.966 "adrfam": "IPv4", 00:19:36.966 "traddr": "10.0.0.2", 00:19:36.966 "trsvcid": "4420" 00:19:36.966 }, 00:19:36.966 "peer_address": { 00:19:36.966 "trtype": "TCP", 00:19:36.966 "adrfam": "IPv4", 00:19:36.966 "traddr": "10.0.0.1", 00:19:36.966 "trsvcid": "35916" 00:19:36.966 }, 00:19:36.966 "auth": { 00:19:36.966 "state": "completed", 00:19:36.966 "digest": "sha512", 00:19:36.966 "dhgroup": "ffdhe3072" 00:19:36.966 } 00:19:36.966 } 00:19:36.966 ]' 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.966 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.261 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:37.261 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.261 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.261 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:37.261 21:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
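The trace above repeats one `connect_authenticate` cycle per key: `bdev_nvme_set_options` pins the DH-HMAC-CHAP digest and dhgroup, `nvmf_subsystem_add_host` registers the host's keys, `bdev_nvme_attach_controller` connects, and the resulting `nvmf_subsystem_get_qpairs` output is checked field by field with `jq`. A minimal offline sketch of that final check follows; the qpair JSON is a sample copied from the log, the `rpc.py` path and host socket appear only as comments, and `sed` stands in for `jq` so the snippet runs without SPDK or jq installed:

```shell
#!/usr/bin/env bash
# Offline sketch of the auth-state check done at target/auth.sh@75..@77.
# In the real test these fields come from:
#   /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
#       -s /var/tmp/host.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
set -euo pipefail

# Sample qpair entry, as captured in the trace above.
qpairs='[ { "cntlid": 117, "qid": 0, "state": "enabled",
            "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe3072" } } ]'

# jq stand-ins: extract .[0].auth.digest / .dhgroup / .state with sed.
digest=$(sed -n 's/.*"digest": "\([^"]*\)".*/\1/p' <<<"$qpairs")
dhgroup=$(sed -n 's/.*"dhgroup": "\([^"]*\)".*/\1/p' <<<"$qpairs")
state=$(sed -n 's/.*"auth": { "state": "\([^"]*\)".*/\1/p' <<<"$qpairs")

# Same three comparisons the test script performs.
[[ $digest == sha512 ]] && [[ $dhgroup == ffdhe3072 ]] && [[ $state == completed ]] \
  && echo "auth negotiated: $digest/$dhgroup"
```

The `state == completed` check is what distinguishes a fully negotiated DH-HMAC-CHAP qpair from one that merely connected.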
00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:38.257 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.258 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.518 00:19:38.518 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.518 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.518 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.777 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.778 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.778 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.778 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.778 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.778 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.778 { 00:19:38.778 "cntlid": 119, 00:19:38.778 "qid": 0, 00:19:38.778 "state": "enabled", 00:19:38.778 "thread": "nvmf_tgt_poll_group_000", 00:19:38.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:38.778 "listen_address": { 00:19:38.778 
"trtype": "TCP", 00:19:38.778 "adrfam": "IPv4", 00:19:38.778 "traddr": "10.0.0.2", 00:19:38.778 "trsvcid": "4420" 00:19:38.778 }, 00:19:38.778 "peer_address": { 00:19:38.778 "trtype": "TCP", 00:19:38.778 "adrfam": "IPv4", 00:19:38.778 "traddr": "10.0.0.1", 00:19:38.778 "trsvcid": "35948" 00:19:38.778 }, 00:19:38.778 "auth": { 00:19:38.778 "state": "completed", 00:19:38.778 "digest": "sha512", 00:19:38.778 "dhgroup": "ffdhe3072" 00:19:38.778 } 00:19:38.778 } 00:19:38.778 ]' 00:19:38.778 21:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.778 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.037 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:39.037 21:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:39.608 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.608 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.608 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.608 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.869 21:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.869 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.129 00:19:40.129 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.129 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.130 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.389 { 00:19:40.389 "cntlid": 121, 00:19:40.389 "qid": 0, 00:19:40.389 "state": "enabled", 00:19:40.389 "thread": "nvmf_tgt_poll_group_000", 00:19:40.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:40.389 "listen_address": { 00:19:40.389 "trtype": "TCP", 00:19:40.389 "adrfam": "IPv4", 00:19:40.389 "traddr": "10.0.0.2", 00:19:40.389 "trsvcid": "4420" 00:19:40.389 }, 00:19:40.389 "peer_address": { 00:19:40.389 "trtype": "TCP", 00:19:40.389 "adrfam": "IPv4", 00:19:40.389 "traddr": "10.0.0.1", 00:19:40.389 "trsvcid": "35970" 00:19:40.389 }, 00:19:40.389 "auth": { 00:19:40.389 "state": "completed", 00:19:40.389 "digest": "sha512", 00:19:40.389 "dhgroup": "ffdhe4096" 00:19:40.389 } 00:19:40.389 } 00:19:40.389 ]' 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.389 21:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.389 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.650 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.650 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.650 21:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.650 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:40.650 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
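The `nvme connect` invocations above pass secrets of the form `DHHC-1:<hh>:<base64 material>:`. The second field selects the hash the key material was transformed with (`00` cleartext, `01` SHA-256, `02` SHA-384, `03` SHA-512), which is why the `key3` secrets in this trace carry the `DHHC-1:03:` prefix. A small helper sketching that mapping; it is illustrative of the secret representation, not a validator:

```shell
#!/usr/bin/env bash
# Map a DH-HMAC-CHAP secret's hash-id field to a name.
# Field values per the DHHC-1 secret representation; treat as illustrative.
dhchap_hash() {
    case ${1#DHHC-1:} in
        00:*) echo cleartext ;;   # untransformed key
        01:*) echo sha256 ;;
        02:*) echo sha384 ;;
        03:*) echo sha512 ;;
        *)    echo unknown; return 1 ;;
    esac
}

dhchap_hash "DHHC-1:03:YjdlMTNhNzA0..."   # prints sha512
```

Note that this hash-id describes the stored key transform and is independent of the `--dhchap-digests sha512` negotiated on the wire, which is why `DHHC-1:01:` and `DHHC-1:02:` secrets appear in a sha512 test run.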
00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.593 21:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.854 00:19:41.854 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.854 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.854 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.115 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.115 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.115 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.115 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.115 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.115 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.115 { 00:19:42.115 "cntlid": 123, 00:19:42.115 "qid": 0, 00:19:42.115 "state": "enabled", 00:19:42.115 "thread": "nvmf_tgt_poll_group_000", 00:19:42.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:42.115 "listen_address": { 00:19:42.115 "trtype": "TCP", 00:19:42.115 "adrfam": "IPv4", 00:19:42.115 "traddr": "10.0.0.2", 00:19:42.115 "trsvcid": "4420" 00:19:42.115 }, 00:19:42.115 "peer_address": { 00:19:42.115 "trtype": "TCP", 00:19:42.115 "adrfam": "IPv4", 00:19:42.115 "traddr": "10.0.0.1", 00:19:42.115 "trsvcid": "36276" 00:19:42.115 }, 00:19:42.115 "auth": { 00:19:42.115 "state": "completed", 00:19:42.115 "digest": "sha512", 00:19:42.115 "dhgroup": "ffdhe4096" 00:19:42.115 } 00:19:42.115 } 00:19:42.116 ]' 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.116 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.376 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:42.376 21:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.316 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.577 00:19:43.577 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.577 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.577 21:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.839 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.839 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.839 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.839 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.839 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.839 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.839 { 00:19:43.839 "cntlid": 125, 00:19:43.839 "qid": 0, 00:19:43.839 "state": "enabled", 00:19:43.840 "thread": "nvmf_tgt_poll_group_000", 00:19:43.840 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:43.840 "listen_address": { 00:19:43.840 "trtype": "TCP", 00:19:43.840 "adrfam": "IPv4", 00:19:43.840 "traddr": "10.0.0.2", 00:19:43.840 "trsvcid": "4420" 00:19:43.840 }, 00:19:43.840 "peer_address": { 00:19:43.840 "trtype": "TCP", 00:19:43.840 "adrfam": "IPv4", 00:19:43.840 "traddr": "10.0.0.1", 00:19:43.840 "trsvcid": "36298" 00:19:43.840 }, 00:19:43.840 "auth": { 00:19:43.840 "state": "completed", 00:19:43.840 "digest": "sha512", 00:19:43.840 "dhgroup": "ffdhe4096" 00:19:43.840 } 00:19:43.840 } 00:19:43.840 ]' 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.840 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.841 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.106 21:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:44.106 21:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.047 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.307 00:19:45.307 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:45.307 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.308 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.568 { 00:19:45.568 "cntlid": 127, 00:19:45.568 "qid": 0, 00:19:45.568 "state": "enabled", 00:19:45.568 "thread": "nvmf_tgt_poll_group_000", 00:19:45.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:45.568 "listen_address": { 00:19:45.568 "trtype": "TCP", 00:19:45.568 "adrfam": "IPv4", 00:19:45.568 "traddr": "10.0.0.2", 00:19:45.568 "trsvcid": "4420" 00:19:45.568 }, 00:19:45.568 "peer_address": { 00:19:45.568 "trtype": "TCP", 00:19:45.568 "adrfam": "IPv4", 00:19:45.568 "traddr": "10.0.0.1", 00:19:45.568 "trsvcid": "36320" 00:19:45.568 }, 00:19:45.568 "auth": { 00:19:45.568 "state": "completed", 00:19:45.568 "digest": "sha512", 00:19:45.568 "dhgroup": "ffdhe4096" 00:19:45.568 } 00:19:45.568 } 00:19:45.568 ]' 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.568 21:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.568 21:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.828 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:45.828 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.769 21:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.769 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.028 00:19:47.028 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.028 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.028 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.289 { 00:19:47.289 "cntlid": 129, 00:19:47.289 "qid": 0, 00:19:47.289 "state": "enabled", 00:19:47.289 "thread": "nvmf_tgt_poll_group_000", 00:19:47.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:47.289 "listen_address": { 00:19:47.289 "trtype": "TCP", 00:19:47.289 "adrfam": "IPv4", 00:19:47.289 "traddr": "10.0.0.2", 00:19:47.289 "trsvcid": "4420" 00:19:47.289 }, 00:19:47.289 "peer_address": { 00:19:47.289 "trtype": "TCP", 00:19:47.289 "adrfam": "IPv4", 00:19:47.289 "traddr": "10.0.0.1", 00:19:47.289 "trsvcid": "36356" 00:19:47.289 }, 00:19:47.289 "auth": { 00:19:47.289 "state": "completed", 00:19:47.289 "digest": "sha512", 00:19:47.289 "dhgroup": "ffdhe6144" 00:19:47.289 } 00:19:47.289 } 00:19:47.289 ]' 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.289 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:47.548 21:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.487 21:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.487 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.488 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.488 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.488 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.488 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:48.488 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.488 21:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.058 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.058 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.058 { 00:19:49.058 "cntlid": 131, 00:19:49.058 "qid": 0, 00:19:49.059 "state": 
"enabled", 00:19:49.059 "thread": "nvmf_tgt_poll_group_000", 00:19:49.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:49.059 "listen_address": { 00:19:49.059 "trtype": "TCP", 00:19:49.059 "adrfam": "IPv4", 00:19:49.059 "traddr": "10.0.0.2", 00:19:49.059 "trsvcid": "4420" 00:19:49.059 }, 00:19:49.059 "peer_address": { 00:19:49.059 "trtype": "TCP", 00:19:49.059 "adrfam": "IPv4", 00:19:49.059 "traddr": "10.0.0.1", 00:19:49.059 "trsvcid": "36384" 00:19:49.059 }, 00:19:49.059 "auth": { 00:19:49.059 "state": "completed", 00:19:49.059 "digest": "sha512", 00:19:49.059 "dhgroup": "ffdhe6144" 00:19:49.059 } 00:19:49.059 } 00:19:49.059 ]' 00:19:49.059 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.318 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.579 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret 
DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:49.579 21:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.150 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.411 21:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.671 00:19:50.671 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.671 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.671 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.931 { 00:19:50.931 "cntlid": 133, 00:19:50.931 "qid": 0, 00:19:50.931 "state": "enabled", 00:19:50.931 "thread": "nvmf_tgt_poll_group_000", 00:19:50.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:50.931 "listen_address": { 00:19:50.931 "trtype": "TCP", 00:19:50.931 "adrfam": "IPv4", 00:19:50.931 "traddr": "10.0.0.2", 00:19:50.931 "trsvcid": "4420" 00:19:50.931 }, 00:19:50.931 "peer_address": { 00:19:50.931 "trtype": "TCP", 00:19:50.931 "adrfam": "IPv4", 00:19:50.931 "traddr": "10.0.0.1", 00:19:50.931 "trsvcid": "57172" 00:19:50.931 }, 00:19:50.931 "auth": { 00:19:50.931 "state": "completed", 00:19:50.931 "digest": "sha512", 00:19:50.931 "dhgroup": "ffdhe6144" 00:19:50.931 } 
00:19:50.931 } 00:19:50.931 ]' 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.931 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:51.192 21:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:52.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.133 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.394 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.394 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.394 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.394 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.656 00:19:52.656 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.656 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.656 21:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.917 { 00:19:52.917 "cntlid": 135, 00:19:52.917 "qid": 0, 00:19:52.917 "state": "enabled", 00:19:52.917 "thread": "nvmf_tgt_poll_group_000", 00:19:52.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:52.917 "listen_address": { 00:19:52.917 "trtype": "TCP", 00:19:52.917 "adrfam": "IPv4", 00:19:52.917 "traddr": "10.0.0.2", 00:19:52.917 "trsvcid": "4420" 00:19:52.917 }, 00:19:52.917 "peer_address": { 00:19:52.917 "trtype": "TCP", 00:19:52.917 "adrfam": "IPv4", 00:19:52.917 "traddr": "10.0.0.1", 00:19:52.917 "trsvcid": "57196" 00:19:52.917 }, 00:19:52.917 "auth": { 00:19:52.917 "state": "completed", 00:19:52.917 "digest": "sha512", 00:19:52.917 "dhgroup": "ffdhe6144" 00:19:52.917 } 00:19:52.917 } 00:19:52.917 ]' 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.917 21:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.917 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.177 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:53.177 21:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:19:53.748 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.008 21:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.008 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.577 00:19:54.577 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.577 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.577 21:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.838 { 00:19:54.838 "cntlid": 137, 00:19:54.838 "qid": 0, 00:19:54.838 "state": "enabled", 00:19:54.838 "thread": "nvmf_tgt_poll_group_000", 00:19:54.838 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:54.838 "listen_address": { 00:19:54.838 "trtype": "TCP", 00:19:54.838 "adrfam": "IPv4", 00:19:54.838 "traddr": "10.0.0.2", 00:19:54.838 "trsvcid": "4420" 00:19:54.838 }, 00:19:54.838 "peer_address": { 00:19:54.838 "trtype": "TCP", 00:19:54.838 "adrfam": "IPv4", 00:19:54.838 "traddr": "10.0.0.1", 00:19:54.838 "trsvcid": "57218" 00:19:54.838 }, 00:19:54.838 "auth": { 00:19:54.838 "state": "completed", 00:19:54.838 "digest": "sha512", 00:19:54.838 "dhgroup": "ffdhe8192" 00:19:54.838 } 00:19:54.838 } 00:19:54.838 ]' 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.838 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.121 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:55.121 21:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.064 21:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.064 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.634 00:19:56.634 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.634 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.634 21:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.894 { 00:19:56.894 "cntlid": 139, 00:19:56.894 "qid": 0, 00:19:56.894 "state": "enabled", 00:19:56.894 "thread": "nvmf_tgt_poll_group_000", 00:19:56.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:56.894 "listen_address": { 00:19:56.894 "trtype": "TCP", 00:19:56.894 "adrfam": "IPv4", 00:19:56.894 "traddr": "10.0.0.2", 00:19:56.894 "trsvcid": "4420" 00:19:56.894 }, 00:19:56.894 "peer_address": { 00:19:56.894 "trtype": "TCP", 00:19:56.894 "adrfam": "IPv4", 00:19:56.894 "traddr": "10.0.0.1", 00:19:56.894 "trsvcid": "57246" 00:19:56.894 }, 00:19:56.894 "auth": { 00:19:56.894 "state": 
"completed", 00:19:56.894 "digest": "sha512", 00:19:56.894 "dhgroup": "ffdhe8192" 00:19:56.894 } 00:19:56.894 } 00:19:56.894 ]' 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.894 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.154 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:57.154 21:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: --dhchap-ctrl-secret DHHC-1:02:Y2M4YTM0OTNmYmJkZTAyMTIxNjI5NWY3NTViODA5NDJhOTU3YmRlYmNkNDU4ZTU0joAdIw==: 00:19:57.723 21:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.983 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.551 00:19:58.551 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.551 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.551 21:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.812 
21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.812 { 00:19:58.812 "cntlid": 141, 00:19:58.812 "qid": 0, 00:19:58.812 "state": "enabled", 00:19:58.812 "thread": "nvmf_tgt_poll_group_000", 00:19:58.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:58.812 "listen_address": { 00:19:58.812 "trtype": "TCP", 00:19:58.812 "adrfam": "IPv4", 00:19:58.812 "traddr": "10.0.0.2", 00:19:58.812 "trsvcid": "4420" 00:19:58.812 }, 00:19:58.812 "peer_address": { 00:19:58.812 "trtype": "TCP", 00:19:58.812 "adrfam": "IPv4", 00:19:58.812 "traddr": "10.0.0.1", 00:19:58.812 "trsvcid": "57266" 00:19:58.812 }, 00:19:58.812 "auth": { 00:19:58.812 "state": "completed", 00:19:58.812 "digest": "sha512", 00:19:58.812 "dhgroup": "ffdhe8192" 00:19:58.812 } 00:19:58.812 } 00:19:58.812 ]' 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.812 21:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.812 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.073 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:19:59.073 21:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:01:YmQzZTQ2YWJjMDU0MWJkMzc1N2ZkMjY3OTk5ZTNjZjFGNmiH: 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.013 
21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.013 21:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.013 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.586 00:20:00.586 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.586 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.586 21:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.845 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.845 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.845 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.845 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.845 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.845 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.845 { 00:20:00.845 "cntlid": 143, 
00:20:00.845 "qid": 0, 00:20:00.845 "state": "enabled", 00:20:00.845 "thread": "nvmf_tgt_poll_group_000", 00:20:00.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:00.845 "listen_address": { 00:20:00.846 "trtype": "TCP", 00:20:00.846 "adrfam": "IPv4", 00:20:00.846 "traddr": "10.0.0.2", 00:20:00.846 "trsvcid": "4420" 00:20:00.846 }, 00:20:00.846 "peer_address": { 00:20:00.846 "trtype": "TCP", 00:20:00.846 "adrfam": "IPv4", 00:20:00.846 "traddr": "10.0.0.1", 00:20:00.846 "trsvcid": "57294" 00:20:00.846 }, 00:20:00.846 "auth": { 00:20:00.846 "state": "completed", 00:20:00.846 "digest": "sha512", 00:20:00.846 "dhgroup": "ffdhe8192" 00:20:00.846 } 00:20:00.846 } 00:20:00.846 ]' 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.846 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.106 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:20:01.106 21:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:20:01.678 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.678 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:01.678 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.678 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.939 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.940 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.511 00:20:02.511 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.511 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.511 21:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.772 { 00:20:02.772 "cntlid": 145, 00:20:02.772 "qid": 0, 00:20:02.772 "state": "enabled", 00:20:02.772 "thread": "nvmf_tgt_poll_group_000", 00:20:02.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:02.772 "listen_address": { 
00:20:02.772 "trtype": "TCP", 00:20:02.772 "adrfam": "IPv4", 00:20:02.772 "traddr": "10.0.0.2", 00:20:02.772 "trsvcid": "4420" 00:20:02.772 }, 00:20:02.772 "peer_address": { 00:20:02.772 "trtype": "TCP", 00:20:02.772 "adrfam": "IPv4", 00:20:02.772 "traddr": "10.0.0.1", 00:20:02.772 "trsvcid": "59906" 00:20:02.772 }, 00:20:02.772 "auth": { 00:20:02.772 "state": "completed", 00:20:02.772 "digest": "sha512", 00:20:02.772 "dhgroup": "ffdhe8192" 00:20:02.772 } 00:20:02.772 } 00:20:02.772 ]' 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.772 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.033 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:20:03.033 21:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ZGQzZWY5YTE0MDFlMmY5NzE0YTYxN2Y3NGNiMGIzN2U5ZjZkYmEwZTQzNGMwYTVhzqRjdQ==: --dhchap-ctrl-secret DHHC-1:03:Y2ZlY2YyNmNhYjk3YzNhZTllYjFmNjgzNmM0NmNiMDBiYjg5ZTA1Y2I3MzVhZmRlYWIzZjBiODc3NGIwYWMyZJ6nz5I=: 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:03.973 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:04.232 request: 00:20:04.232 { 00:20:04.232 "name": "nvme0", 00:20:04.232 "trtype": "tcp", 00:20:04.232 "traddr": "10.0.0.2", 00:20:04.232 "adrfam": "ipv4", 00:20:04.232 "trsvcid": "4420", 00:20:04.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:04.232 "prchk_reftag": false, 00:20:04.232 "prchk_guard": false, 00:20:04.232 "hdgst": false, 00:20:04.232 "ddgst": 
false, 00:20:04.232 "dhchap_key": "key2", 00:20:04.232 "allow_unrecognized_csi": false, 00:20:04.232 "method": "bdev_nvme_attach_controller", 00:20:04.232 "req_id": 1 00:20:04.232 } 00:20:04.232 Got JSON-RPC error response 00:20:04.232 response: 00:20:04.232 { 00:20:04.232 "code": -5, 00:20:04.232 "message": "Input/output error" 00:20:04.232 } 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.491 21:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.058 request: 00:20:05.058 { 00:20:05.058 "name": "nvme0", 00:20:05.058 "trtype": "tcp", 00:20:05.058 "traddr": "10.0.0.2", 
00:20:05.058 "adrfam": "ipv4", 00:20:05.058 "trsvcid": "4420", 00:20:05.058 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:05.058 "prchk_reftag": false, 00:20:05.058 "prchk_guard": false, 00:20:05.058 "hdgst": false, 00:20:05.058 "ddgst": false, 00:20:05.058 "dhchap_key": "key1", 00:20:05.058 "dhchap_ctrlr_key": "ckey2", 00:20:05.058 "allow_unrecognized_csi": false, 00:20:05.058 "method": "bdev_nvme_attach_controller", 00:20:05.058 "req_id": 1 00:20:05.058 } 00:20:05.058 Got JSON-RPC error response 00:20:05.058 response: 00:20:05.058 { 00:20:05.058 "code": -5, 00:20:05.058 "message": "Input/output error" 00:20:05.058 } 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.058 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.317 request: 00:20:05.317 { 00:20:05.317 "name": "nvme0", 00:20:05.317 "trtype": "tcp", 00:20:05.317 "traddr": "10.0.0.2", 00:20:05.317 "adrfam": "ipv4", 00:20:05.317 "trsvcid": "4420", 00:20:05.317 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:05.317 "prchk_reftag": false, 00:20:05.317 "prchk_guard": false, 00:20:05.317 "hdgst": false, 00:20:05.317 "ddgst": false, 00:20:05.317 "dhchap_key": "key1", 00:20:05.317 "dhchap_ctrlr_key": "ckey1", 00:20:05.317 "allow_unrecognized_csi": false, 00:20:05.317 "method": "bdev_nvme_attach_controller", 00:20:05.317 "req_id": 1 00:20:05.317 } 00:20:05.317 Got JSON-RPC error response 00:20:05.317 response: 00:20:05.317 { 00:20:05.317 "code": -5, 00:20:05.317 "message": "Input/output error" 00:20:05.317 } 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.317 
21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2069946 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2069946 ']' 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2069946 00:20:05.317 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2069946 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2069946' 00:20:05.577 killing process with pid 2069946 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2069946 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2069946 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2097757 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2097757 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2097757 ']' 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.577 21:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2097757 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2097757 ']' 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.837 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.096 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 null0 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Lv2 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.zGA ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zGA 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pHH 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.YOy ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YOy 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.fXz 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.JKX ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.JKX 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aHL 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.097 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.357 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.357 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.357 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.357 21:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.296 nvme0n1 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.296 { 00:20:07.296 "cntlid": 1, 00:20:07.296 "qid": 0, 00:20:07.296 "state": "enabled", 00:20:07.296 "thread": "nvmf_tgt_poll_group_000", 00:20:07.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:07.296 "listen_address": { 00:20:07.296 "trtype": "TCP", 00:20:07.296 "adrfam": "IPv4", 00:20:07.296 "traddr": "10.0.0.2", 00:20:07.296 "trsvcid": "4420" 00:20:07.296 }, 00:20:07.296 "peer_address": { 00:20:07.296 "trtype": "TCP", 00:20:07.296 "adrfam": "IPv4", 00:20:07.296 "traddr": "10.0.0.1", 00:20:07.296 "trsvcid": "59942" 00:20:07.296 }, 00:20:07.296 "auth": { 00:20:07.296 "state": "completed", 00:20:07.296 "digest": "sha512", 00:20:07.296 "dhgroup": "ffdhe8192" 00:20:07.296 } 00:20:07.296 } 00:20:07.296 ]' 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.296 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.556 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:20:07.556 21:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.497 21:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.757 request: 00:20:08.757 { 00:20:08.757 "name": "nvme0", 00:20:08.757 "trtype": "tcp", 00:20:08.757 "traddr": "10.0.0.2", 00:20:08.757 "adrfam": "ipv4", 00:20:08.757 "trsvcid": "4420", 00:20:08.757 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:08.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:08.757 "prchk_reftag": false, 00:20:08.757 "prchk_guard": false, 00:20:08.757 "hdgst": false, 00:20:08.757 "ddgst": false, 00:20:08.757 "dhchap_key": "key3", 00:20:08.757 "allow_unrecognized_csi": false, 00:20:08.757 "method": "bdev_nvme_attach_controller", 00:20:08.757 "req_id": 1 00:20:08.757 } 00:20:08.757 Got JSON-RPC error response 00:20:08.757 response: 00:20:08.757 { 00:20:08.757 "code": -5, 00:20:08.757 "message": "Input/output error" 00:20:08.757 } 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.757 21:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:08.757 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.017 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.018 request: 00:20:09.018 { 00:20:09.018 "name": "nvme0", 00:20:09.018 "trtype": "tcp", 00:20:09.018 "traddr": "10.0.0.2", 00:20:09.018 "adrfam": "ipv4", 00:20:09.018 "trsvcid": "4420", 00:20:09.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.018 "prchk_reftag": false, 00:20:09.018 "prchk_guard": false, 00:20:09.018 "hdgst": false, 00:20:09.018 "ddgst": false, 00:20:09.018 "dhchap_key": "key3", 00:20:09.018 "allow_unrecognized_csi": false, 00:20:09.018 "method": "bdev_nvme_attach_controller", 00:20:09.018 "req_id": 1 00:20:09.018 } 00:20:09.018 Got JSON-RPC error response 00:20:09.018 response: 00:20:09.018 { 00:20:09.018 "code": -5, 00:20:09.018 "message": "Input/output error" 00:20:09.018 } 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.018 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.278 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.538 request: 00:20:09.538 { 00:20:09.538 "name": "nvme0", 00:20:09.538 "trtype": "tcp", 00:20:09.538 "traddr": "10.0.0.2", 00:20:09.538 "adrfam": "ipv4", 00:20:09.538 "trsvcid": "4420", 00:20:09.538 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.538 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.538 "prchk_reftag": false, 00:20:09.538 "prchk_guard": false, 00:20:09.538 "hdgst": false, 00:20:09.538 "ddgst": false, 00:20:09.538 "dhchap_key": "key0", 00:20:09.538 "dhchap_ctrlr_key": "key1", 00:20:09.538 "allow_unrecognized_csi": false, 00:20:09.538 "method": "bdev_nvme_attach_controller", 00:20:09.538 "req_id": 1 00:20:09.538 } 00:20:09.538 Got JSON-RPC error response 00:20:09.538 response: 00:20:09.538 { 00:20:09.538 "code": -5, 00:20:09.538 "message": "Input/output error" 00:20:09.538 } 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:09.798 21:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:09.798 nvme0n1 00:20:10.058 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:20:10.058 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:10.058 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.058 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.058 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.058 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:10.318 21:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:11.260 nvme0n1 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:11.260 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.520 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.520 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:20:11.520 21:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: --dhchap-ctrl-secret DHHC-1:03:YjdlMTNhNzA0Y2I5ZmE1NGJmMTc3OTU5M2RhNjdkMDQ2MGQ1MmNjMTE1MTgwYWQxZDEwN2NkY2I3OTg4MWU3Zeuihk0=: 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.461 21:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:12.461 21:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:13.034 request: 00:20:13.034 { 00:20:13.034 "name": "nvme0", 00:20:13.034 "trtype": "tcp", 00:20:13.034 "traddr": "10.0.0.2", 00:20:13.034 "adrfam": "ipv4", 00:20:13.034 "trsvcid": "4420", 00:20:13.034 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:13.034 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:13.034 "prchk_reftag": false, 00:20:13.034 "prchk_guard": false, 00:20:13.034 "hdgst": false, 00:20:13.034 "ddgst": false, 00:20:13.034 "dhchap_key": "key1", 00:20:13.034 "allow_unrecognized_csi": false, 00:20:13.034 "method": "bdev_nvme_attach_controller", 00:20:13.034 "req_id": 1 00:20:13.034 } 00:20:13.034 Got JSON-RPC error response 00:20:13.034 response: 00:20:13.034 { 00:20:13.034 "code": -5, 00:20:13.034 "message": "Input/output error" 00:20:13.034 } 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:13.034 21:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:13.971 nvme0n1 00:20:13.971 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:20:13.971 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:13.971 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.971 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.971 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.971 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.231 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:14.232 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.232 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.232 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.232 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:14.232 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:14.232 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:14.232 nvme0n1 00:20:14.491 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:14.491 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:14.491 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.491 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.491 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.491 21:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: '' 2s 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:14.751 21:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: ]] 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ODM2MDA4MDc1N2YzNzg0ZGFmMTM4YTVkZDdhNTAyZmFnCQyP: 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:14.751 21:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: 2s 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: ]] 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:ZmI1ZDk0YzBjZTQ1NzJiZDY4MWM4NDQ3YzZkODU5NWVjNTEwMzViNGYwZmZkZDQwFtVTZQ==: 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:16.660 21:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:18.713 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.975 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:18.975 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.975 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.975 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.975 21:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:18.975 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:18.975 21:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:19.917 nvme0n1 00:20:19.917 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:19.917 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.917 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.917 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.917 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:19.917 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:20.177 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:20.177 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:20.177 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:20.438 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:20.699 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:20.699 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:20.699 21:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.699 21:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:20.699 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:21.271 request: 00:20:21.271 { 00:20:21.271 "name": "nvme0", 00:20:21.271 "dhchap_key": "key1", 00:20:21.271 "dhchap_ctrlr_key": "key3", 00:20:21.271 "method": "bdev_nvme_set_keys", 00:20:21.271 "req_id": 1 00:20:21.271 } 00:20:21.271 Got JSON-RPC error response 00:20:21.271 response: 00:20:21.271 { 00:20:21.271 "code": -13, 00:20:21.271 "message": "Permission denied" 00:20:21.271 } 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:21.271 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.531 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:21.531 21:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:22.473 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:22.473 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:22.473 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:22.735 21:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:23.678 nvme0n1 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:23.678 21:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:23.941 request: 00:20:23.941 { 00:20:23.941 "name": "nvme0", 00:20:23.941 "dhchap_key": "key2", 
00:20:23.941 "dhchap_ctrlr_key": "key0", 00:20:23.941 "method": "bdev_nvme_set_keys", 00:20:23.941 "req_id": 1 00:20:23.941 } 00:20:23.941 Got JSON-RPC error response 00:20:23.941 response: 00:20:23.941 { 00:20:23.941 "code": -13, 00:20:23.941 "message": "Permission denied" 00:20:23.941 } 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.941 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:24.201 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:24.201 21:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:25.141 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:25.141 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:25.141 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:25.403 21:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2070172 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2070172 ']' 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2070172 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070172 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070172' 00:20:25.403 killing process with pid 2070172 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2070172 00:20:25.403 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2070172 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.664 rmmod nvme_tcp 00:20:25.664 rmmod nvme_fabrics 00:20:25.664 rmmod nvme_keyring 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2097757 ']' 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2097757 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2097757 ']' 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2097757 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.664 21:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2097757 00:20:25.664 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.664 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.665 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2097757' 00:20:25.665 killing process with pid 2097757 00:20:25.665 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2097757 00:20:25.665 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2097757 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.926 21:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.837 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:27.837 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Lv2 /tmp/spdk.key-sha256.pHH 
/tmp/spdk.key-sha384.fXz /tmp/spdk.key-sha512.aHL /tmp/spdk.key-sha512.zGA /tmp/spdk.key-sha384.YOy /tmp/spdk.key-sha256.JKX '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:27.837 00:20:27.837 real 2m44.869s 00:20:27.837 user 6m4.898s 00:20:27.837 sys 0m25.157s 00:20:27.837 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.837 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.837 ************************************ 00:20:27.837 END TEST nvmf_auth_target 00:20:27.837 ************************************ 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.099 ************************************ 00:20:28.099 START TEST nvmf_bdevio_no_huge 00:20:28.099 ************************************ 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:28.099 * Looking for test storage... 
00:20:28.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:28.099 21:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:28.099 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.361 21:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.361 --rc genhtml_branch_coverage=1 00:20:28.361 --rc genhtml_function_coverage=1 00:20:28.361 --rc genhtml_legend=1 00:20:28.361 --rc geninfo_all_blocks=1 00:20:28.361 --rc geninfo_unexecuted_blocks=1 00:20:28.361 00:20:28.361 ' 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.361 --rc genhtml_branch_coverage=1 00:20:28.361 --rc genhtml_function_coverage=1 00:20:28.361 --rc genhtml_legend=1 00:20:28.361 --rc geninfo_all_blocks=1 00:20:28.361 --rc geninfo_unexecuted_blocks=1 00:20:28.361 00:20:28.361 ' 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.361 --rc genhtml_branch_coverage=1 00:20:28.361 --rc genhtml_function_coverage=1 00:20:28.361 --rc genhtml_legend=1 00:20:28.361 --rc geninfo_all_blocks=1 00:20:28.361 --rc geninfo_unexecuted_blocks=1 00:20:28.361 00:20:28.361 ' 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.361 --rc genhtml_branch_coverage=1 00:20:28.361 --rc genhtml_function_coverage=1 00:20:28.361 --rc genhtml_legend=1 00:20:28.361 --rc geninfo_all_blocks=1 00:20:28.361 --rc geninfo_unexecuted_blocks=1 00:20:28.361 00:20:28.361 ' 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:28.361 
21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.361 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
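The `[: : integer expression expected` error above comes from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: POSIX `test` requires both operands of `-eq` to be integers, and an empty variable fails that check. A minimal sketch (not the SPDK code itself, variable name illustrative) of the failure mode and the usual guard:

```shell
# Simulates the unset flag that produced the error in the log.
flag=""

# Unsafe: with flag empty this expands to `[ '' -eq 1 ]` and prints
# "[: : integer expression expected", exactly as seen above.
# [ "$flag" -eq 1 ] && echo "hugepages disabled"

# Safe: default the variable to 0 before the numeric comparison.
if [ "${flag:-0}" -eq 1 ]; then
    echo "hugepages disabled"
else
    echo "hugepages enabled"    # taken here, since flag is empty
fi
```

Because the script runs without `set -e` on that line, the error is non-fatal and the trace simply continues, which is why the run proceeds normally afterwards.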
00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.362 21:13:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:20:36.503 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:36.503 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:36.503 Found net devices under 0000:31:00.0: cvl_0_0 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.503 
21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:36.503 Found net devices under 0000:31:00.1: cvl_0_1 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.503 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
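The device-discovery loop above relies on the kernel exposing each PCI NIC's interfaces under `/sys/bus/pci/devices/<addr>/net/`, then stripping the path prefix with `"${pci_net_devs[@]##*/}"` to get bare interface names like `cvl_0_0`. A hedged sketch of that pattern, run against a mock sysfs tree (the PCI addresses and interface names below mirror the log but are illustrative):

```shell
# List the network interface names the kernel associates with one PCI function.
# base is the sysfs root (parameterized so the sketch is testable without hardware).
list_net_devs() {
    local base=$1 pci=$2 devs=()
    for d in "$base/$pci/net/"*; do
        [ -e "$d" ] && devs+=("${d##*/}")    # keep only the interface name
    done
    echo "${devs[@]}"
}

# Demonstrate against a mock sysfs tree instead of the real /sys/bus/pci.
mock=$(mktemp -d)
mkdir -p "$mock/0000:31:00.0/net/cvl_0_0" "$mock/0000:31:00.1/net/cvl_0_1"
list_net_devs "$mock" 0000:31:00.0    # prints: cvl_0_0
```

The `[ -e "$d" ]` guard matters: with no match, the glob stays literal, and without the check a bogus `*` entry would land in the array.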
00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.504 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.765 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.766 21:13:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:36.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:20:36.766 00:20:36.766 --- 10.0.0.2 ping statistics --- 00:20:36.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.766 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:20:36.766 00:20:36.766 --- 10.0.0.1 ping statistics --- 00:20:36.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.766 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2106433 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2106433 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2106433 ']' 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.766 21:13:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:36.766 [2024-12-05 21:13:38.194429] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
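The namespace plumbing traced earlier (`ip netns add`, moving `cvl_0_0` into `cvl_0_0_ns_spdk`, addressing both ends, then bidirectional pings) isolates the target NIC so initiator and target traffic cross a real network path on one host. A dry-run sketch of that sequence, assuming the interface names from the log; running it for real requires root, so it defaults to echoing the commands:

```shell
# Move the target NIC into its own netns, address both ends, verify with ping.
# Pass a 4th argument of "" (empty) to actually execute instead of echoing.
setup_target_ns() {
    local ns=$1 target_if=$2 init_if=$3 run=${4-echo}
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$init_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$init_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ping -c 1 10.0.0.2                        # initiator -> target
    $run ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator
}

# Dry run: print the command sequence.
setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

This is also why the target app is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` below: every socket the target opens lives in the isolated namespace, reachable only via 10.0.0.2.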
00:20:36.766 [2024-12-05 21:13:38.194500] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:37.027 [2024-12-05 21:13:38.313492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.027 [2024-12-05 21:13:38.372762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.027 [2024-12-05 21:13:38.372799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.027 [2024-12-05 21:13:38.372807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.027 [2024-12-05 21:13:38.372814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.027 [2024-12-05 21:13:38.372823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.027 [2024-12-05 21:13:38.374224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.027 [2024-12-05 21:13:38.374385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.027 [2024-12-05 21:13:38.374540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.027 [2024-12-05 21:13:38.374541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.598 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.598 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:37.598 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.598 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.598 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.858 [2024-12-05 21:13:39.078109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.858 21:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.858 Malloc0 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.858 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.859 [2024-12-05 21:13:39.132241] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.859 21:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.859 { 00:20:37.859 "params": { 00:20:37.859 "name": "Nvme$subsystem", 00:20:37.859 "trtype": "$TEST_TRANSPORT", 00:20:37.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.859 "adrfam": "ipv4", 00:20:37.859 "trsvcid": "$NVMF_PORT", 00:20:37.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.859 "hdgst": ${hdgst:-false}, 00:20:37.859 "ddgst": ${ddgst:-false} 00:20:37.859 }, 00:20:37.859 "method": "bdev_nvme_attach_controller" 00:20:37.859 } 00:20:37.859 EOF 00:20:37.859 )") 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
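`gen_nvmf_target_json` above builds the bdevio `--json` config by appending one heredoc fragment per subsystem to a bash array, joining the fragments with `IFS=,`, and piping through `jq .` so malformed JSON fails loudly. A simplified sketch of that pattern with two subsystems (fields trimmed; the full script templates traddr, NQNs, and digest flags as shown in the trace):

```shell
# Accumulate one JSON fragment per subsystem via command substitution + heredoc.
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done

# Join with commas inside a subshell so the IFS change does not leak,
# then wrap in [] to form a JSON array.
json="[$(IFS=,; printf '%s' "${config[*]}")]"

# Validate/inspect with jq when available (prints "Nvme2" if jq is installed).
command -v jq >/dev/null && jq -c '.[1].params.name' <<< "$json"
```

Note the subshell around the `IFS=,` join: a plain `IFS=, printf '%s' "${config[*]}"` prefix assignment does not reliably affect the `"${config[*]}"` expansion itself, which is a classic bash pitfall this pattern avoids.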
00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:37.859 21:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.859 "params": { 00:20:37.859 "name": "Nvme1", 00:20:37.859 "trtype": "tcp", 00:20:37.859 "traddr": "10.0.0.2", 00:20:37.859 "adrfam": "ipv4", 00:20:37.859 "trsvcid": "4420", 00:20:37.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.859 "hdgst": false, 00:20:37.859 "ddgst": false 00:20:37.859 }, 00:20:37.859 "method": "bdev_nvme_attach_controller" 00:20:37.859 }' 00:20:37.859 [2024-12-05 21:13:39.191885] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:20:37.859 [2024-12-05 21:13:39.191960] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2106782 ] 00:20:37.859 [2024-12-05 21:13:39.282591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:38.119 [2024-12-05 21:13:39.338367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.119 [2024-12-05 21:13:39.338484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.119 [2024-12-05 21:13:39.338488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.379 I/O targets: 00:20:38.379 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:38.379 00:20:38.379 00:20:38.379 CUnit - A unit testing framework for C - Version 2.1-3 00:20:38.379 http://cunit.sourceforge.net/ 00:20:38.379 00:20:38.379 00:20:38.379 Suite: bdevio tests on: Nvme1n1 00:20:38.379 Test: blockdev write read block ...passed 00:20:38.379 Test: blockdev write zeroes read block ...passed 00:20:38.379 Test: blockdev write zeroes read no split ...passed 00:20:38.379 Test: blockdev write zeroes 
read split ...passed 00:20:38.379 Test: blockdev write zeroes read split partial ...passed 00:20:38.379 Test: blockdev reset ...[2024-12-05 21:13:39.801223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:38.379 [2024-12-05 21:13:39.801292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1ef70 (9): Bad file descriptor 00:20:38.639 [2024-12-05 21:13:39.821687] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:38.639 passed 00:20:38.639 Test: blockdev write read 8 blocks ...passed 00:20:38.639 Test: blockdev write read size > 128k ...passed 00:20:38.639 Test: blockdev write read invalid size ...passed 00:20:38.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:38.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:38.639 Test: blockdev write read max offset ...passed 00:20:38.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:38.639 Test: blockdev writev readv 8 blocks ...passed 00:20:38.639 Test: blockdev writev readv 30 x 1block ...passed 00:20:38.639 Test: blockdev writev readv block ...passed 00:20:38.639 Test: blockdev writev readv size > 128k ...passed 00:20:38.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:38.639 Test: blockdev comparev and writev ...[2024-12-05 21:13:40.006433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.006460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.006478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 
21:13:40.006485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.006931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.006940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.006951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.006957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.007431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.007440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.007450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.007456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.007833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.007842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:38.639 [2024-12-05 21:13:40.007852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.639 [2024-12-05 21:13:40.007857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:38.639 passed 00:20:38.900 Test: blockdev nvme passthru rw ...passed 00:20:38.900 Test: blockdev nvme passthru vendor specific ...[2024-12-05 21:13:40.092451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.900 [2024-12-05 21:13:40.092470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:38.900 [2024-12-05 21:13:40.092686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.900 [2024-12-05 21:13:40.092694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:38.900 [2024-12-05 21:13:40.093002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.900 [2024-12-05 21:13:40.093012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:38.900 [2024-12-05 21:13:40.093357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.900 [2024-12-05 21:13:40.093366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:38.900 passed 00:20:38.900 Test: blockdev nvme admin passthru ...passed 00:20:38.900 Test: blockdev copy ...passed 00:20:38.900 00:20:38.900 Run Summary: Type Total Ran Passed Failed Inactive 00:20:38.900 suites 1 1 n/a 0 0 00:20:38.900 tests 23 23 23 0 0 00:20:38.900 asserts 152 152 152 0 n/a 00:20:38.900 00:20:38.900 Elapsed time = 0.997 seconds 
00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.160 rmmod nvme_tcp 00:20:39.160 rmmod nvme_fabrics 00:20:39.160 rmmod nvme_keyring 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2106433 ']' 00:20:39.160 21:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2106433 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2106433 ']' 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2106433 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106433 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:39.160 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106433' 00:20:39.160 killing process with pid 2106433 00:20:39.161 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2106433 00:20:39.161 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2106433 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:39.420 21:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.420 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.421 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.421 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.421 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.421 21:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.964 00:20:41.964 real 0m13.542s 00:20:41.964 user 0m14.403s 00:20:41.964 sys 0m7.357s 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.964 ************************************ 00:20:41.964 END TEST nvmf_bdevio_no_huge 00:20:41.964 ************************************ 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.964 
************************************ 00:20:41.964 START TEST nvmf_tls 00:20:41.964 ************************************ 00:20:41.964 21:13:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:41.964 * Looking for test storage... 00:20:41.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.964 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.965 --rc genhtml_branch_coverage=1 00:20:41.965 --rc genhtml_function_coverage=1 00:20:41.965 --rc genhtml_legend=1 00:20:41.965 --rc geninfo_all_blocks=1 00:20:41.965 --rc geninfo_unexecuted_blocks=1 00:20:41.965 00:20:41.965 ' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.965 --rc genhtml_branch_coverage=1 00:20:41.965 --rc genhtml_function_coverage=1 00:20:41.965 --rc genhtml_legend=1 00:20:41.965 --rc geninfo_all_blocks=1 00:20:41.965 --rc geninfo_unexecuted_blocks=1 00:20:41.965 00:20:41.965 ' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.965 --rc genhtml_branch_coverage=1 00:20:41.965 --rc genhtml_function_coverage=1 00:20:41.965 --rc genhtml_legend=1 00:20:41.965 --rc geninfo_all_blocks=1 00:20:41.965 --rc geninfo_unexecuted_blocks=1 00:20:41.965 00:20:41.965 ' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.965 --rc genhtml_branch_coverage=1 00:20:41.965 --rc genhtml_function_coverage=1 00:20:41.965 --rc genhtml_legend=1 00:20:41.965 --rc geninfo_all_blocks=1 00:20:41.965 --rc geninfo_unexecuted_blocks=1 00:20:41.965 00:20:41.965 ' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.965 
21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.965 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.966 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.966 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.966 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:41.966 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:41.966 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:41.966 21:13:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.110 21:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:50.110 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:50.110 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.110 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.111 21:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:50.111 Found net devices under 0000:31:00.0: cvl_0_0 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:50.111 Found net devices under 0000:31:00.1: cvl_0_1 00:20:50.111 21:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.111 
21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.111 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.372 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:20:50.372 00:20:50.372 --- 10.0.0.2 ping statistics --- 00:20:50.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.373 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:20:50.373 00:20:50.373 --- 10.0.0.1 ping statistics --- 00:20:50.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.373 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2111802 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2111802 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2111802 ']' 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.373 21:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.373 [2024-12-05 21:13:51.679261] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:20:50.373 [2024-12-05 21:13:51.679354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.373 [2024-12-05 21:13:51.791669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.634 [2024-12-05 21:13:51.842037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.634 [2024-12-05 21:13:51.842086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:50.634 [2024-12-05 21:13:51.842095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.634 [2024-12-05 21:13:51.842102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.634 [2024-12-05 21:13:51.842109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.634 [2024-12-05 21:13:51.842898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:51.207 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:51.468 true 00:20:51.468 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.468 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:51.468 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:51.468 21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:51.468 
21:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:51.728 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.728 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:51.989 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:51.989 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:51.989 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:51.989 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.989 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:52.250 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:52.250 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:52.250 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.250 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:52.511 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:52.511 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:52.511 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:52.772 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.772 21:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:52.772 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:52.772 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:52.772 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:53.032 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.032 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:53.293 21:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.c7gGlJaiKs 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.YQsHVKBdVA 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.c7gGlJaiKs 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.YQsHVKBdVA 00:20:53.293 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:53.554 21:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:53.814 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.c7gGlJaiKs 00:20:53.814 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.c7gGlJaiKs 00:20:53.814 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.814 [2024-12-05 21:13:55.178073] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.814 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.075 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.335 [2024-12-05 21:13:55.510880] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.335 [2024-12-05 21:13:55.511086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.335 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.335 malloc0 00:20:54.335 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.594 21:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.c7gGlJaiKs 00:20:54.594 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.855 21:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.c7gGlJaiKs 00:21:07.087 Initializing NVMe Controllers 00:21:07.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:07.087 Initialization complete. Launching workers. 
00:21:07.087 ======================================================== 00:21:07.087 Latency(us) 00:21:07.087 Device Information : IOPS MiB/s Average min max 00:21:07.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18646.45 72.84 3432.31 1165.81 5215.55 00:21:07.087 ======================================================== 00:21:07.087 Total : 18646.45 72.84 3432.31 1165.81 5215.55 00:21:07.087 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c7gGlJaiKs 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.c7gGlJaiKs 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2114547 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2114547 /var/tmp/bdevperf.sock 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2114547 ']' 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.087 [2024-12-05 21:14:06.363319] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:07.087 [2024-12-05 21:14:06.363379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114547 ] 00:21:07.087 [2024-12-05 21:14:06.427656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.087 [2024-12-05 21:14:06.457019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.c7gGlJaiKs 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:07.087 [2024-12-05 21:14:06.863409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.087 TLSTESTn1 00:21:07.087 21:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:07.087 Running I/O for 10 seconds... 00:21:07.659 4767.00 IOPS, 18.62 MiB/s [2024-12-05T20:14:10.485Z] 5644.00 IOPS, 22.05 MiB/s [2024-12-05T20:14:11.059Z] 5869.67 IOPS, 22.93 MiB/s [2024-12-05T20:14:12.446Z] 5886.25 IOPS, 22.99 MiB/s [2024-12-05T20:14:13.386Z] 5810.20 IOPS, 22.70 MiB/s [2024-12-05T20:14:14.327Z] 5935.33 IOPS, 23.18 MiB/s [2024-12-05T20:14:15.266Z] 5728.00 IOPS, 22.38 MiB/s [2024-12-05T20:14:16.206Z] 5548.25 IOPS, 21.67 MiB/s [2024-12-05T20:14:17.145Z] 5456.56 IOPS, 21.31 MiB/s [2024-12-05T20:14:17.145Z] 5377.70 IOPS, 21.01 MiB/s 00:21:15.708 Latency(us) 00:21:15.708 [2024-12-05T20:14:17.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.708 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:15.708 Verification LBA range: start 0x0 length 0x2000 00:21:15.708 TLSTESTn1 : 10.01 5382.90 21.03 0.00 0.00 23746.57 6007.47 65972.91 00:21:15.708 [2024-12-05T20:14:17.145Z] =================================================================================================================== 00:21:15.708 [2024-12-05T20:14:17.145Z] Total : 5382.90 21.03 0.00 0.00 23746.57 6007.47 65972.91 00:21:15.708 { 00:21:15.708 "results": [ 00:21:15.708 { 00:21:15.708 "job": "TLSTESTn1", 00:21:15.708 "core_mask": "0x4", 00:21:15.708 "workload": "verify", 00:21:15.708 "status": "finished", 00:21:15.708 "verify_range": { 00:21:15.708 "start": 0, 00:21:15.708 "length": 8192 00:21:15.708 }, 00:21:15.708 "queue_depth": 128, 00:21:15.708 "io_size": 4096, 00:21:15.708 "runtime": 10.014112, 00:21:15.708 "iops": 
5382.903646374237, 00:21:15.708 "mibps": 21.026967368649363, 00:21:15.708 "io_failed": 0, 00:21:15.708 "io_timeout": 0, 00:21:15.708 "avg_latency_us": 23746.568801657235, 00:21:15.708 "min_latency_us": 6007.466666666666, 00:21:15.708 "max_latency_us": 65972.90666666666 00:21:15.708 } 00:21:15.708 ], 00:21:15.708 "core_count": 1 00:21:15.708 } 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2114547 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2114547 ']' 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2114547 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.708 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2114547 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2114547' 00:21:15.969 killing process with pid 2114547 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2114547 00:21:15.969 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.969 00:21:15.969 Latency(us) 00:21:15.969 [2024-12-05T20:14:17.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.969 [2024-12-05T20:14:17.406Z] 
=================================================================================================================== 00:21:15.969 [2024-12-05T20:14:17.406Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2114547 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQsHVKBdVA 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQsHVKBdVA 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YQsHVKBdVA 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YQsHVKBdVA 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2116770 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2116770 /var/tmp/bdevperf.sock 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2116770 ']' 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.969 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.969 [2024-12-05 21:14:17.333406] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:15.969 [2024-12-05 21:14:17.333514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116770 ] 00:21:15.969 [2024-12-05 21:14:17.400091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.230 [2024-12-05 21:14:17.428781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.230 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.230 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.230 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YQsHVKBdVA 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.490 [2024-12-05 21:14:17.847294] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.490 [2024-12-05 21:14:17.851855] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:16.490 [2024-12-05 21:14:17.852483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a75b0 (107): Transport endpoint is not connected 00:21:16.490 [2024-12-05 21:14:17.853478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a75b0 (9): Bad file descriptor 00:21:16.490 
[2024-12-05 21:14:17.854480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:16.490 [2024-12-05 21:14:17.854488] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:16.490 [2024-12-05 21:14:17.854495] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:16.490 [2024-12-05 21:14:17.854503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:16.490 request: 00:21:16.490 { 00:21:16.490 "name": "TLSTEST", 00:21:16.490 "trtype": "tcp", 00:21:16.490 "traddr": "10.0.0.2", 00:21:16.490 "adrfam": "ipv4", 00:21:16.490 "trsvcid": "4420", 00:21:16.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.490 "prchk_reftag": false, 00:21:16.490 "prchk_guard": false, 00:21:16.490 "hdgst": false, 00:21:16.490 "ddgst": false, 00:21:16.490 "psk": "key0", 00:21:16.490 "allow_unrecognized_csi": false, 00:21:16.490 "method": "bdev_nvme_attach_controller", 00:21:16.490 "req_id": 1 00:21:16.490 } 00:21:16.490 Got JSON-RPC error response 00:21:16.490 response: 00:21:16.490 { 00:21:16.490 "code": -5, 00:21:16.490 "message": "Input/output error" 00:21:16.490 } 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2116770 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2116770 ']' 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2116770 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.490 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2116770 00:21:16.751 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.751 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.751 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2116770' 00:21:16.751 killing process with pid 2116770 00:21:16.751 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2116770 00:21:16.751 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.751 00:21:16.751 Latency(us) 00:21:16.751 [2024-12-05T20:14:18.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.751 [2024-12-05T20:14:18.188Z] =================================================================================================================== 00:21:16.751 [2024-12-05T20:14:18.188Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.751 21:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2116770 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.c7gGlJaiKs 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.c7gGlJaiKs 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.c7gGlJaiKs 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.c7gGlJaiKs 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2116903 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2116903 /var/tmp/bdevperf.sock 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2116903 ']' 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.751 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.751 [2024-12-05 21:14:18.100905] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:16.751 [2024-12-05 21:14:18.100960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116903 ] 00:21:16.751 [2024-12-05 21:14:18.165445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.011 [2024-12-05 21:14:18.193575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.011 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.012 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.012 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.c7gGlJaiKs 00:21:17.273 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:17.273 [2024-12-05 21:14:18.603973] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.273 [2024-12-05 21:14:18.612112] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:17.273 [2024-12-05 21:14:18.612131] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:17.273 [2024-12-05 21:14:18.612150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:17.273 [2024-12-05 21:14:18.612233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c305b0 (107): Transport endpoint is not connected 00:21:17.273 [2024-12-05 21:14:18.613221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c305b0 (9): Bad file descriptor 00:21:17.273 [2024-12-05 21:14:18.614223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:17.273 [2024-12-05 21:14:18.614231] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:17.274 [2024-12-05 21:14:18.614237] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:17.274 [2024-12-05 21:14:18.614245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:17.274 request: 00:21:17.274 { 00:21:17.274 "name": "TLSTEST", 00:21:17.274 "trtype": "tcp", 00:21:17.274 "traddr": "10.0.0.2", 00:21:17.274 "adrfam": "ipv4", 00:21:17.274 "trsvcid": "4420", 00:21:17.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.274 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:17.274 "prchk_reftag": false, 00:21:17.274 "prchk_guard": false, 00:21:17.274 "hdgst": false, 00:21:17.274 "ddgst": false, 00:21:17.274 "psk": "key0", 00:21:17.274 "allow_unrecognized_csi": false, 00:21:17.274 "method": "bdev_nvme_attach_controller", 00:21:17.274 "req_id": 1 00:21:17.274 } 00:21:17.274 Got JSON-RPC error response 00:21:17.274 response: 00:21:17.274 { 00:21:17.274 "code": -5, 00:21:17.274 "message": "Input/output error" 00:21:17.274 } 00:21:17.274 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2116903 00:21:17.274 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2116903 ']' 00:21:17.274 21:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2116903 00:21:17.274 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.274 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.274 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2116903 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2116903' 00:21:17.537 killing process with pid 2116903 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2116903 00:21:17.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.537 00:21:17.537 Latency(us) 00:21:17.537 [2024-12-05T20:14:18.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.537 [2024-12-05T20:14:18.974Z] =================================================================================================================== 00:21:17.537 [2024-12-05T20:14:18.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2116903 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.537 21:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.c7gGlJaiKs 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.c7gGlJaiKs 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.c7gGlJaiKs 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.c7gGlJaiKs 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2117036 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2117036 /var/tmp/bdevperf.sock 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2117036 ']' 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.537 21:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.537 [2024-12-05 21:14:18.860559] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:17.537 [2024-12-05 21:14:18.860616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117036 ] 00:21:17.537 [2024-12-05 21:14:18.924180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.537 [2024-12-05 21:14:18.952727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.798 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.798 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.798 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.c7gGlJaiKs 00:21:17.798 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.058 [2024-12-05 21:14:19.335035] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.058 [2024-12-05 21:14:19.343883] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:18.058 [2024-12-05 21:14:19.343902] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:18.058 [2024-12-05 21:14:19.343920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:18.058 [2024-12-05 21:14:19.344031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191d5b0 (107): Transport endpoint is not connected 00:21:18.058 [2024-12-05 21:14:19.345018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191d5b0 (9): Bad file descriptor 00:21:18.058 [2024-12-05 21:14:19.346020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:18.058 [2024-12-05 21:14:19.346028] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:18.058 [2024-12-05 21:14:19.346033] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:18.058 [2024-12-05 21:14:19.346041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:18.058 request: 00:21:18.058 { 00:21:18.058 "name": "TLSTEST", 00:21:18.058 "trtype": "tcp", 00:21:18.058 "traddr": "10.0.0.2", 00:21:18.058 "adrfam": "ipv4", 00:21:18.058 "trsvcid": "4420", 00:21:18.058 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:18.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.059 "prchk_reftag": false, 00:21:18.059 "prchk_guard": false, 00:21:18.059 "hdgst": false, 00:21:18.059 "ddgst": false, 00:21:18.059 "psk": "key0", 00:21:18.059 "allow_unrecognized_csi": false, 00:21:18.059 "method": "bdev_nvme_attach_controller", 00:21:18.059 "req_id": 1 00:21:18.059 } 00:21:18.059 Got JSON-RPC error response 00:21:18.059 response: 00:21:18.059 { 00:21:18.059 "code": -5, 00:21:18.059 "message": "Input/output error" 00:21:18.059 } 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2117036 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2117036 ']' 00:21:18.059 21:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2117036 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117036 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117036' 00:21:18.059 killing process with pid 2117036 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2117036 00:21:18.059 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.059 00:21:18.059 Latency(us) 00:21:18.059 [2024-12-05T20:14:19.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.059 [2024-12-05T20:14:19.496Z] =================================================================================================================== 00:21:18.059 [2024-12-05T20:14:19.496Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.059 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2117036 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.359 21:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:18.359 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2117246 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.360 21:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2117246 /var/tmp/bdevperf.sock 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2117246 ']' 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.360 [2024-12-05 21:14:19.573149] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:18.360 [2024-12-05 21:14:19.573204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117246 ] 00:21:18.360 [2024-12-05 21:14:19.637494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.360 [2024-12-05 21:14:19.665401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.360 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:18.717 [2024-12-05 21:14:19.903181] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:18.717 [2024-12-05 21:14:19.903211] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:18.717 request: 00:21:18.717 { 00:21:18.717 "name": "key0", 00:21:18.717 "path": "", 00:21:18.717 "method": "keyring_file_add_key", 00:21:18.717 "req_id": 1 00:21:18.717 } 00:21:18.717 Got JSON-RPC error response 00:21:18.717 response: 00:21:18.717 { 00:21:18.717 "code": -1, 00:21:18.717 "message": "Operation not permitted" 00:21:18.717 } 00:21:18.717 21:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.717 [2024-12-05 21:14:20.083720] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:18.717 [2024-12-05 21:14:20.083753] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:18.717 request: 00:21:18.717 { 00:21:18.717 "name": "TLSTEST", 00:21:18.717 "trtype": "tcp", 00:21:18.717 "traddr": "10.0.0.2", 00:21:18.717 "adrfam": "ipv4", 00:21:18.717 "trsvcid": "4420", 00:21:18.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.717 "prchk_reftag": false, 00:21:18.717 "prchk_guard": false, 00:21:18.717 "hdgst": false, 00:21:18.717 "ddgst": false, 00:21:18.717 "psk": "key0", 00:21:18.717 "allow_unrecognized_csi": false, 00:21:18.717 "method": "bdev_nvme_attach_controller", 00:21:18.717 "req_id": 1 00:21:18.717 } 00:21:18.717 Got JSON-RPC error response 00:21:18.717 response: 00:21:18.717 { 00:21:18.717 "code": -126, 00:21:18.717 "message": "Required key not available" 00:21:18.717 } 00:21:18.717 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2117246 00:21:18.717 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2117246 ']' 00:21:18.717 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2117246 00:21:18.717 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.717 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.717 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117246 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117246' 00:21:19.007 killing process with pid 2117246 
00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2117246 00:21:19.007 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.007 00:21:19.007 Latency(us) 00:21:19.007 [2024-12-05T20:14:20.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.007 [2024-12-05T20:14:20.444Z] =================================================================================================================== 00:21:19.007 [2024-12-05T20:14:20.444Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2117246 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.007 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2111802 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2111802 ']' 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2111802 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2111802 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2111802' 00:21:19.008 killing process with pid 2111802 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2111802 00:21:19.008 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2111802 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.P9dZDKvZGJ 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:19.269 21:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.P9dZDKvZGJ 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2117398 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2117398 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2117398 ']' 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.269 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.269 [2024-12-05 21:14:20.570353] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
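The `format_interchange_psk`/`format_key` step above turns the raw configured PSK into the `NVMeTLSkey-1:02:...:` string written to the temp key file. A minimal sketch of that construction — prefix, two-digit hash identifier, then base64 of the key bytes with a CRC32 checksum appended (the little-endian byte order of the CRC is an assumption based on the output seen in this log, not taken from SPDK source):

```python
import base64
import zlib

def format_interchange_psk(key: bytes, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Wrap a configured PSK in the TLS PSK interchange format seen above:
    '<prefix>:<2-hex-digit digest>:<base64(key || CRC32(key))>:'."""
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC32 appended little-endian
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode())

# The key and digest used by this test run.
key = b"00112233445566778899aabbccddeeff0011223344556677"
psk = format_interchange_psk(key, 2)
```

Decoding the base64 field of the `key_long` value in the log confirms the layout: the first 48 bytes are the configured key verbatim, followed by 4 checksum bytes.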
00:21:19.269 [2024-12-05 21:14:20.570401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.269 [2024-12-05 21:14:20.629121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.269 [2024-12-05 21:14:20.657244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.269 [2024-12-05 21:14:20.657269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.269 [2024-12-05 21:14:20.657276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.269 [2024-12-05 21:14:20.657282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.269 [2024-12-05 21:14:20.657286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.269 [2024-12-05 21:14:20.657769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.P9dZDKvZGJ 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P9dZDKvZGJ 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:19.530 [2024-12-05 21:14:20.925067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.530 21:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:19.791 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:20.051 [2024-12-05 21:14:21.261898] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.051 [2024-12-05 21:14:21.262103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:20.051 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.051 malloc0 00:21:20.051 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:20.312 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:20.312 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P9dZDKvZGJ 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.P9dZDKvZGJ 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2117636 00:21:20.572 21:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2117636 /var/tmp/bdevperf.sock 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2117636 ']' 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.572 21:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.572 [2024-12-05 21:14:21.939485] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:20.573 [2024-12-05 21:14:21.939537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117636 ] 00:21:20.833 [2024-12-05 21:14:22.019878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.833 [2024-12-05 21:14:22.048664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.833 21:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.833 21:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:20.833 21:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:21.095 21:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.095 [2024-12-05 21:14:22.471024] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.356 TLSTESTn1 00:21:21.356 21:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:21.356 Running I/O for 10 seconds... 
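bdevperf was launched with `-o 4096` (4 KiB I/Os), so the IOPS and MiB/s columns it reports are two views of the same number: MiB/s = IOPS × io_size / 2^20, i.e. IOPS / 256 for 4 KiB blocks. A quick arithmetic check against the per-run totals printed in this log:

```python
# bdevperf ran with "-o 4096" (4 KiB I/Os); MiB/s is just IOPS scaled by io_size.
io_size = 4096
iops = 5126.7614410656615   # "iops" field from the results JSON in this log
mibps = iops * io_size / 2**20  # equals iops / 256 for 4 KiB blocks
```

The computed value matches the `"mibps": 20.02641187916274` field reported alongside it.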
00:21:23.239 5451.00 IOPS, 21.29 MiB/s [2024-12-05T20:14:26.062Z] 5796.00 IOPS, 22.64 MiB/s [2024-12-05T20:14:27.003Z] 5931.33 IOPS, 23.17 MiB/s [2024-12-05T20:14:27.946Z] 5677.25 IOPS, 22.18 MiB/s [2024-12-05T20:14:28.887Z] 5431.80 IOPS, 21.22 MiB/s [2024-12-05T20:14:29.829Z] 5381.00 IOPS, 21.02 MiB/s [2024-12-05T20:14:30.773Z] 5305.14 IOPS, 20.72 MiB/s [2024-12-05T20:14:31.715Z] 5236.12 IOPS, 20.45 MiB/s [2024-12-05T20:14:33.103Z] 5129.78 IOPS, 20.04 MiB/s [2024-12-05T20:14:33.103Z] 5121.00 IOPS, 20.00 MiB/s 00:21:31.666 Latency(us) 00:21:31.666 [2024-12-05T20:14:33.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.666 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:31.666 Verification LBA range: start 0x0 length 0x2000 00:21:31.666 TLSTESTn1 : 10.01 5126.76 20.03 0.00 0.00 24934.70 4669.44 61603.84 00:21:31.666 [2024-12-05T20:14:33.103Z] =================================================================================================================== 00:21:31.666 [2024-12-05T20:14:33.103Z] Total : 5126.76 20.03 0.00 0.00 24934.70 4669.44 61603.84 00:21:31.666 { 00:21:31.666 "results": [ 00:21:31.666 { 00:21:31.666 "job": "TLSTESTn1", 00:21:31.666 "core_mask": "0x4", 00:21:31.666 "workload": "verify", 00:21:31.666 "status": "finished", 00:21:31.666 "verify_range": { 00:21:31.666 "start": 0, 00:21:31.666 "length": 8192 00:21:31.666 }, 00:21:31.666 "queue_depth": 128, 00:21:31.666 "io_size": 4096, 00:21:31.666 "runtime": 10.013534, 00:21:31.666 "iops": 5126.7614410656615, 00:21:31.666 "mibps": 20.02641187916274, 00:21:31.666 "io_failed": 0, 00:21:31.666 "io_timeout": 0, 00:21:31.666 "avg_latency_us": 24934.700462694225, 00:21:31.666 "min_latency_us": 4669.44, 00:21:31.666 "max_latency_us": 61603.84 00:21:31.666 } 00:21:31.666 ], 00:21:31.666 "core_count": 1 00:21:31.666 } 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2117636 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2117636 ']' 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2117636 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117636 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117636' 00:21:31.666 killing process with pid 2117636 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2117636 00:21:31.666 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.666 00:21:31.666 Latency(us) 00:21:31.666 [2024-12-05T20:14:33.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.666 [2024-12-05T20:14:33.103Z] =================================================================================================================== 00:21:31.666 [2024-12-05T20:14:33.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2117636 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.P9dZDKvZGJ 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P9dZDKvZGJ 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P9dZDKvZGJ 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.P9dZDKvZGJ 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.P9dZDKvZGJ 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.666 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2119885 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2119885 /var/tmp/bdevperf.sock 00:21:31.667 
21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2119885 ']' 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.667 21:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.667 [2024-12-05 21:14:32.961536] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
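The test just loosened the key file to mode 0666, and SPDK's keyring_file backend refuses key files that group or other can access — producing the `Invalid permissions for key file ... 0100666` error that follows. A minimal sketch of that kind of permission gate, assuming a POSIX `stat` (the actual check lives in SPDK's keyring.c and may differ in detail):

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    """Accept only key files with no group/other access bits set,
    mirroring the 0600-ok / 0666-rejected behavior in this log."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Demonstrate both outcomes on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
world_accessible_ok = key_file_permissions_ok(path)  # rejected, like the failing test
os.chmod(path, 0o600)
owner_only_ok = key_file_permissions_ok(path)        # accepted
os.unlink(path)
```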
00:21:31.667 [2024-12-05 21:14:32.961603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119885 ] 00:21:31.667 [2024-12-05 21:14:33.026989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.667 [2024-12-05 21:14:33.056521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.928 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.928 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.928 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:31.928 [2024-12-05 21:14:33.294528] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.P9dZDKvZGJ': 0100666 00:21:31.928 [2024-12-05 21:14:33.294556] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:31.928 request: 00:21:31.928 { 00:21:31.928 "name": "key0", 00:21:31.928 "path": "/tmp/tmp.P9dZDKvZGJ", 00:21:31.928 "method": "keyring_file_add_key", 00:21:31.928 "req_id": 1 00:21:31.928 } 00:21:31.928 Got JSON-RPC error response 00:21:31.928 response: 00:21:31.928 { 00:21:31.928 "code": -1, 00:21:31.928 "message": "Operation not permitted" 00:21:31.928 } 00:21:31.928 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.188 [2024-12-05 21:14:33.471043] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.188 [2024-12-05 21:14:33.471073] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:32.188 request: 00:21:32.188 { 00:21:32.188 "name": "TLSTEST", 00:21:32.188 "trtype": "tcp", 00:21:32.188 "traddr": "10.0.0.2", 00:21:32.188 "adrfam": "ipv4", 00:21:32.188 "trsvcid": "4420", 00:21:32.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.188 "prchk_reftag": false, 00:21:32.188 "prchk_guard": false, 00:21:32.188 "hdgst": false, 00:21:32.188 "ddgst": false, 00:21:32.188 "psk": "key0", 00:21:32.188 "allow_unrecognized_csi": false, 00:21:32.188 "method": "bdev_nvme_attach_controller", 00:21:32.188 "req_id": 1 00:21:32.188 } 00:21:32.188 Got JSON-RPC error response 00:21:32.188 response: 00:21:32.188 { 00:21:32.188 "code": -126, 00:21:32.188 "message": "Required key not available" 00:21:32.188 } 00:21:32.188 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2119885 00:21:32.188 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2119885 ']' 00:21:32.188 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2119885 00:21:32.188 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:32.188 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.189 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2119885 00:21:32.189 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:32.189 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:32.189 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2119885' 00:21:32.189 killing process with pid 2119885 00:21:32.189 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2119885 00:21:32.189 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.189 00:21:32.189 Latency(us) 00:21:32.189 [2024-12-05T20:14:33.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.189 [2024-12-05T20:14:33.626Z] =================================================================================================================== 00:21:32.189 [2024-12-05T20:14:33.626Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:32.189 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2119885 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2117398 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2117398 ']' 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2117398 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117398 00:21:32.449 
21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117398' 00:21:32.449 killing process with pid 2117398 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2117398 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2117398 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2119998 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2119998 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2119998 ']' 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.449 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.450 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.450 21:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.710 [2024-12-05 21:14:33.913021] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:32.711 [2024-12-05 21:14:33.913090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.711 [2024-12-05 21:14:34.006531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.711 [2024-12-05 21:14:34.034276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.711 [2024-12-05 21:14:34.034306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.711 [2024-12-05 21:14:34.034312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.711 [2024-12-05 21:14:34.034317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.711 [2024-12-05 21:14:34.034321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
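The `request:` / `Got JSON-RPC error response` pairs printed throughout this log come from rpc.py talking JSON-RPC to the target over `/var/tmp/spdk.sock`. A minimal sketch of assembling one of the requests shown here — the `jsonrpc` and `id` envelope fields are an assumption about the wire format, since the log prints only the method, params, and error body:

```python
import json

def build_request(method: str, params: dict, req_id: int = 1) -> str:
    """Assemble a JSON-RPC 2.0-style request like the ones rpc.py logs."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# The failing call from this log: registering a world-readable key file.
req = build_request("keyring_file_add_key",
                    {"name": "key0", "path": "/tmp/tmp.P9dZDKvZGJ"})
# The target replies with an error envelope carrying the body seen in the log,
# e.g. {"code": -1, "message": "Operation not permitted"}.
```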
00:21:32.711 [2024-12-05 21:14:34.034797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.711 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.711 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.711 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.711 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.711 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.P9dZDKvZGJ 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.P9dZDKvZGJ 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.P9dZDKvZGJ 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P9dZDKvZGJ 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.972 [2024-12-05 21:14:34.306108] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.972 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:33.233 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.233 [2024-12-05 21:14:34.642936] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.233 [2024-12-05 21:14:34.643148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.233 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.493 malloc0 00:21:33.493 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.753 21:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:33.753 [2024-12-05 21:14:35.118022] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.P9dZDKvZGJ': 0100666 00:21:33.753 [2024-12-05 21:14:35.118045] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:33.753 request: 00:21:33.753 { 00:21:33.753 "name": "key0", 00:21:33.753 "path": "/tmp/tmp.P9dZDKvZGJ", 00:21:33.753 "method": "keyring_file_add_key", 00:21:33.753 "req_id": 1 
00:21:33.753 } 00:21:33.753 Got JSON-RPC error response 00:21:33.753 response: 00:21:33.753 { 00:21:33.753 "code": -1, 00:21:33.753 "message": "Operation not permitted" 00:21:33.753 } 00:21:33.753 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:34.013 [2024-12-05 21:14:35.270412] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:34.013 [2024-12-05 21:14:35.270438] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:34.013 request: 00:21:34.013 { 00:21:34.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.013 "host": "nqn.2016-06.io.spdk:host1", 00:21:34.013 "psk": "key0", 00:21:34.013 "method": "nvmf_subsystem_add_host", 00:21:34.013 "req_id": 1 00:21:34.013 } 00:21:34.013 Got JSON-RPC error response 00:21:34.013 response: 00:21:34.013 { 00:21:34.013 "code": -32603, 00:21:34.013 "message": "Internal error" 00:21:34.013 } 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2119998 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2119998 ']' 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2119998 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.013 21:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2119998 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2119998' 00:21:34.013 killing process with pid 2119998 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2119998 00:21:34.013 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2119998 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.P9dZDKvZGJ 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2120364 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2120364 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2120364 ']' 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.273 21:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.273 [2024-12-05 21:14:35.529338] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:34.273 [2024-12-05 21:14:35.529396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.273 [2024-12-05 21:14:35.626300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.273 [2024-12-05 21:14:35.655940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.273 [2024-12-05 21:14:35.655967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.273 [2024-12-05 21:14:35.655973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.273 [2024-12-05 21:14:35.655978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.273 [2024-12-05 21:14:35.655982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
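The earlier `keyring_file_add_key` failure ("Invalid permissions for key file ... 0100666") is SPDK's keyring rejecting a PSK file that is readable by group or other; the test's subsequent `chmod 0600` on `/tmp/tmp.P9dZDKvZGJ` is what lets the retry succeed. A minimal Python sketch of that permission check (the function name `check_key_file_permissions` is hypothetical, chosen for illustration; SPDK's actual check lives in `keyring.c:keyring_file_check_path`):

```python
import os
import stat
import tempfile

def check_key_file_permissions(path):
    # Reject key files with any group/other access bits set,
    # mirroring (as a simplification) SPDK's keyring file check:
    # a 0666 file fails, a 0600 file passes.
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

if __name__ == "__main__":
    # Demonstrate the failure and the chmod 0600 fix seen in the log.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    os.chmod(path, 0o666)   # world-readable, like the rejected key file
    print(check_key_file_permissions(path))
    os.chmod(path, 0o600)   # owner-only, the fix applied by tls.sh@182
    print(check_key_file_permissions(path))
    os.unlink(path)
```

This explains why the first `nvmf_subsystem_add_host --psk key0` attempt returns "Key 'key0' does not exist": the key was never added to the keyring, so the dependent RPC fails with -32603.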
00:21:34.273 [2024-12-05 21:14:35.656424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.P9dZDKvZGJ 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P9dZDKvZGJ 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.213 [2024-12-05 21:14:36.517103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.213 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.473 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.474 [2024-12-05 21:14:36.849915] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.474 [2024-12-05 21:14:36.850114] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:35.474 21:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:35.734 malloc0 00:21:35.734 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:35.993 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:35.993 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2120730 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2120730 /var/tmp/bdevperf.sock 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2120730 ']' 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:21:36.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.252 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.252 [2024-12-05 21:14:37.539776] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:36.252 [2024-12-05 21:14:37.539830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120730 ] 00:21:36.252 [2024-12-05 21:14:37.603171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.252 [2024-12-05 21:14:37.632184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.512 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.512 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:36.512 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:36.513 21:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.773 [2024-12-05 21:14:38.018615] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.773 TLSTESTn1 00:21:36.773 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:37.034 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:37.034 "subsystems": [ 00:21:37.034 { 00:21:37.034 "subsystem": "keyring", 00:21:37.034 "config": [ 00:21:37.034 { 00:21:37.034 "method": "keyring_file_add_key", 00:21:37.034 "params": { 00:21:37.034 "name": "key0", 00:21:37.034 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:37.034 } 00:21:37.034 } 00:21:37.034 ] 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "subsystem": "iobuf", 00:21:37.034 "config": [ 00:21:37.034 { 00:21:37.034 "method": "iobuf_set_options", 00:21:37.034 "params": { 00:21:37.034 "small_pool_count": 8192, 00:21:37.034 "large_pool_count": 1024, 00:21:37.034 "small_bufsize": 8192, 00:21:37.034 "large_bufsize": 135168, 00:21:37.034 "enable_numa": false 00:21:37.034 } 00:21:37.034 } 00:21:37.034 ] 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "subsystem": "sock", 00:21:37.034 "config": [ 00:21:37.034 { 00:21:37.034 "method": "sock_set_default_impl", 00:21:37.034 "params": { 00:21:37.034 "impl_name": "posix" 00:21:37.034 } 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "method": "sock_impl_set_options", 00:21:37.034 "params": { 00:21:37.034 "impl_name": "ssl", 00:21:37.034 "recv_buf_size": 4096, 00:21:37.034 "send_buf_size": 4096, 00:21:37.034 "enable_recv_pipe": true, 00:21:37.034 "enable_quickack": false, 00:21:37.034 "enable_placement_id": 0, 00:21:37.034 "enable_zerocopy_send_server": true, 00:21:37.034 "enable_zerocopy_send_client": false, 00:21:37.034 "zerocopy_threshold": 0, 00:21:37.034 "tls_version": 0, 00:21:37.034 "enable_ktls": false 00:21:37.034 } 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "method": "sock_impl_set_options", 00:21:37.034 "params": { 00:21:37.034 "impl_name": "posix", 00:21:37.034 "recv_buf_size": 2097152, 00:21:37.034 "send_buf_size": 2097152, 00:21:37.034 "enable_recv_pipe": true, 00:21:37.034 "enable_quickack": false, 00:21:37.034 "enable_placement_id": 0, 
00:21:37.034 "enable_zerocopy_send_server": true, 00:21:37.034 "enable_zerocopy_send_client": false, 00:21:37.034 "zerocopy_threshold": 0, 00:21:37.034 "tls_version": 0, 00:21:37.034 "enable_ktls": false 00:21:37.034 } 00:21:37.034 } 00:21:37.034 ] 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "subsystem": "vmd", 00:21:37.034 "config": [] 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "subsystem": "accel", 00:21:37.034 "config": [ 00:21:37.034 { 00:21:37.034 "method": "accel_set_options", 00:21:37.034 "params": { 00:21:37.034 "small_cache_size": 128, 00:21:37.034 "large_cache_size": 16, 00:21:37.034 "task_count": 2048, 00:21:37.034 "sequence_count": 2048, 00:21:37.034 "buf_count": 2048 00:21:37.034 } 00:21:37.034 } 00:21:37.034 ] 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "subsystem": "bdev", 00:21:37.034 "config": [ 00:21:37.034 { 00:21:37.034 "method": "bdev_set_options", 00:21:37.034 "params": { 00:21:37.034 "bdev_io_pool_size": 65535, 00:21:37.034 "bdev_io_cache_size": 256, 00:21:37.034 "bdev_auto_examine": true, 00:21:37.034 "iobuf_small_cache_size": 128, 00:21:37.034 "iobuf_large_cache_size": 16 00:21:37.034 } 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "method": "bdev_raid_set_options", 00:21:37.034 "params": { 00:21:37.034 "process_window_size_kb": 1024, 00:21:37.034 "process_max_bandwidth_mb_sec": 0 00:21:37.034 } 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "method": "bdev_iscsi_set_options", 00:21:37.034 "params": { 00:21:37.034 "timeout_sec": 30 00:21:37.034 } 00:21:37.034 }, 00:21:37.034 { 00:21:37.034 "method": "bdev_nvme_set_options", 00:21:37.034 "params": { 00:21:37.034 "action_on_timeout": "none", 00:21:37.034 "timeout_us": 0, 00:21:37.034 "timeout_admin_us": 0, 00:21:37.034 "keep_alive_timeout_ms": 10000, 00:21:37.034 "arbitration_burst": 0, 00:21:37.034 "low_priority_weight": 0, 00:21:37.034 "medium_priority_weight": 0, 00:21:37.034 "high_priority_weight": 0, 00:21:37.034 "nvme_adminq_poll_period_us": 10000, 00:21:37.034 "nvme_ioq_poll_period_us": 0, 
00:21:37.034 "io_queue_requests": 0, 00:21:37.035 "delay_cmd_submit": true, 00:21:37.035 "transport_retry_count": 4, 00:21:37.035 "bdev_retry_count": 3, 00:21:37.035 "transport_ack_timeout": 0, 00:21:37.035 "ctrlr_loss_timeout_sec": 0, 00:21:37.035 "reconnect_delay_sec": 0, 00:21:37.035 "fast_io_fail_timeout_sec": 0, 00:21:37.035 "disable_auto_failback": false, 00:21:37.035 "generate_uuids": false, 00:21:37.035 "transport_tos": 0, 00:21:37.035 "nvme_error_stat": false, 00:21:37.035 "rdma_srq_size": 0, 00:21:37.035 "io_path_stat": false, 00:21:37.035 "allow_accel_sequence": false, 00:21:37.035 "rdma_max_cq_size": 0, 00:21:37.035 "rdma_cm_event_timeout_ms": 0, 00:21:37.035 "dhchap_digests": [ 00:21:37.035 "sha256", 00:21:37.035 "sha384", 00:21:37.035 "sha512" 00:21:37.035 ], 00:21:37.035 "dhchap_dhgroups": [ 00:21:37.035 "null", 00:21:37.035 "ffdhe2048", 00:21:37.035 "ffdhe3072", 00:21:37.035 "ffdhe4096", 00:21:37.035 "ffdhe6144", 00:21:37.035 "ffdhe8192" 00:21:37.035 ] 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "bdev_nvme_set_hotplug", 00:21:37.035 "params": { 00:21:37.035 "period_us": 100000, 00:21:37.035 "enable": false 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "bdev_malloc_create", 00:21:37.035 "params": { 00:21:37.035 "name": "malloc0", 00:21:37.035 "num_blocks": 8192, 00:21:37.035 "block_size": 4096, 00:21:37.035 "physical_block_size": 4096, 00:21:37.035 "uuid": "e53833f1-5aef-427f-9175-03051cdac5ca", 00:21:37.035 "optimal_io_boundary": 0, 00:21:37.035 "md_size": 0, 00:21:37.035 "dif_type": 0, 00:21:37.035 "dif_is_head_of_md": false, 00:21:37.035 "dif_pi_format": 0 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "bdev_wait_for_examine" 00:21:37.035 } 00:21:37.035 ] 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "subsystem": "nbd", 00:21:37.035 "config": [] 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "subsystem": "scheduler", 00:21:37.035 "config": [ 00:21:37.035 { 00:21:37.035 "method": 
"framework_set_scheduler", 00:21:37.035 "params": { 00:21:37.035 "name": "static" 00:21:37.035 } 00:21:37.035 } 00:21:37.035 ] 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "subsystem": "nvmf", 00:21:37.035 "config": [ 00:21:37.035 { 00:21:37.035 "method": "nvmf_set_config", 00:21:37.035 "params": { 00:21:37.035 "discovery_filter": "match_any", 00:21:37.035 "admin_cmd_passthru": { 00:21:37.035 "identify_ctrlr": false 00:21:37.035 }, 00:21:37.035 "dhchap_digests": [ 00:21:37.035 "sha256", 00:21:37.035 "sha384", 00:21:37.035 "sha512" 00:21:37.035 ], 00:21:37.035 "dhchap_dhgroups": [ 00:21:37.035 "null", 00:21:37.035 "ffdhe2048", 00:21:37.035 "ffdhe3072", 00:21:37.035 "ffdhe4096", 00:21:37.035 "ffdhe6144", 00:21:37.035 "ffdhe8192" 00:21:37.035 ] 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_set_max_subsystems", 00:21:37.035 "params": { 00:21:37.035 "max_subsystems": 1024 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_set_crdt", 00:21:37.035 "params": { 00:21:37.035 "crdt1": 0, 00:21:37.035 "crdt2": 0, 00:21:37.035 "crdt3": 0 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_create_transport", 00:21:37.035 "params": { 00:21:37.035 "trtype": "TCP", 00:21:37.035 "max_queue_depth": 128, 00:21:37.035 "max_io_qpairs_per_ctrlr": 127, 00:21:37.035 "in_capsule_data_size": 4096, 00:21:37.035 "max_io_size": 131072, 00:21:37.035 "io_unit_size": 131072, 00:21:37.035 "max_aq_depth": 128, 00:21:37.035 "num_shared_buffers": 511, 00:21:37.035 "buf_cache_size": 4294967295, 00:21:37.035 "dif_insert_or_strip": false, 00:21:37.035 "zcopy": false, 00:21:37.035 "c2h_success": false, 00:21:37.035 "sock_priority": 0, 00:21:37.035 "abort_timeout_sec": 1, 00:21:37.035 "ack_timeout": 0, 00:21:37.035 "data_wr_pool_size": 0 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_create_subsystem", 00:21:37.035 "params": { 00:21:37.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.035 
"allow_any_host": false, 00:21:37.035 "serial_number": "SPDK00000000000001", 00:21:37.035 "model_number": "SPDK bdev Controller", 00:21:37.035 "max_namespaces": 10, 00:21:37.035 "min_cntlid": 1, 00:21:37.035 "max_cntlid": 65519, 00:21:37.035 "ana_reporting": false 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_subsystem_add_host", 00:21:37.035 "params": { 00:21:37.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.035 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.035 "psk": "key0" 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_subsystem_add_ns", 00:21:37.035 "params": { 00:21:37.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.035 "namespace": { 00:21:37.035 "nsid": 1, 00:21:37.035 "bdev_name": "malloc0", 00:21:37.035 "nguid": "E53833F15AEF427F917503051CDAC5CA", 00:21:37.035 "uuid": "e53833f1-5aef-427f-9175-03051cdac5ca", 00:21:37.035 "no_auto_visible": false 00:21:37.035 } 00:21:37.035 } 00:21:37.035 }, 00:21:37.035 { 00:21:37.035 "method": "nvmf_subsystem_add_listener", 00:21:37.035 "params": { 00:21:37.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.035 "listen_address": { 00:21:37.035 "trtype": "TCP", 00:21:37.035 "adrfam": "IPv4", 00:21:37.035 "traddr": "10.0.0.2", 00:21:37.035 "trsvcid": "4420" 00:21:37.035 }, 00:21:37.035 "secure_channel": true 00:21:37.035 } 00:21:37.035 } 00:21:37.035 ] 00:21:37.035 } 00:21:37.035 ] 00:21:37.035 }' 00:21:37.035 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:37.296 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:37.296 "subsystems": [ 00:21:37.296 { 00:21:37.296 "subsystem": "keyring", 00:21:37.296 "config": [ 00:21:37.296 { 00:21:37.296 "method": "keyring_file_add_key", 00:21:37.296 "params": { 00:21:37.296 "name": "key0", 00:21:37.296 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:37.296 } 
00:21:37.296 } 00:21:37.296 ] 00:21:37.296 }, 00:21:37.296 { 00:21:37.296 "subsystem": "iobuf", 00:21:37.296 "config": [ 00:21:37.296 { 00:21:37.296 "method": "iobuf_set_options", 00:21:37.296 "params": { 00:21:37.296 "small_pool_count": 8192, 00:21:37.296 "large_pool_count": 1024, 00:21:37.296 "small_bufsize": 8192, 00:21:37.296 "large_bufsize": 135168, 00:21:37.296 "enable_numa": false 00:21:37.296 } 00:21:37.296 } 00:21:37.296 ] 00:21:37.296 }, 00:21:37.296 { 00:21:37.296 "subsystem": "sock", 00:21:37.296 "config": [ 00:21:37.296 { 00:21:37.296 "method": "sock_set_default_impl", 00:21:37.296 "params": { 00:21:37.296 "impl_name": "posix" 00:21:37.296 } 00:21:37.296 }, 00:21:37.296 { 00:21:37.296 "method": "sock_impl_set_options", 00:21:37.296 "params": { 00:21:37.296 "impl_name": "ssl", 00:21:37.296 "recv_buf_size": 4096, 00:21:37.296 "send_buf_size": 4096, 00:21:37.296 "enable_recv_pipe": true, 00:21:37.296 "enable_quickack": false, 00:21:37.296 "enable_placement_id": 0, 00:21:37.296 "enable_zerocopy_send_server": true, 00:21:37.296 "enable_zerocopy_send_client": false, 00:21:37.296 "zerocopy_threshold": 0, 00:21:37.296 "tls_version": 0, 00:21:37.296 "enable_ktls": false 00:21:37.296 } 00:21:37.296 }, 00:21:37.296 { 00:21:37.296 "method": "sock_impl_set_options", 00:21:37.296 "params": { 00:21:37.296 "impl_name": "posix", 00:21:37.296 "recv_buf_size": 2097152, 00:21:37.296 "send_buf_size": 2097152, 00:21:37.297 "enable_recv_pipe": true, 00:21:37.297 "enable_quickack": false, 00:21:37.297 "enable_placement_id": 0, 00:21:37.297 "enable_zerocopy_send_server": true, 00:21:37.297 "enable_zerocopy_send_client": false, 00:21:37.297 "zerocopy_threshold": 0, 00:21:37.297 "tls_version": 0, 00:21:37.297 "enable_ktls": false 00:21:37.297 } 00:21:37.297 } 00:21:37.297 ] 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "subsystem": "vmd", 00:21:37.297 "config": [] 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "subsystem": "accel", 00:21:37.297 "config": [ 00:21:37.297 { 00:21:37.297 
"method": "accel_set_options", 00:21:37.297 "params": { 00:21:37.297 "small_cache_size": 128, 00:21:37.297 "large_cache_size": 16, 00:21:37.297 "task_count": 2048, 00:21:37.297 "sequence_count": 2048, 00:21:37.297 "buf_count": 2048 00:21:37.297 } 00:21:37.297 } 00:21:37.297 ] 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "subsystem": "bdev", 00:21:37.297 "config": [ 00:21:37.297 { 00:21:37.297 "method": "bdev_set_options", 00:21:37.297 "params": { 00:21:37.297 "bdev_io_pool_size": 65535, 00:21:37.297 "bdev_io_cache_size": 256, 00:21:37.297 "bdev_auto_examine": true, 00:21:37.297 "iobuf_small_cache_size": 128, 00:21:37.297 "iobuf_large_cache_size": 16 00:21:37.297 } 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "method": "bdev_raid_set_options", 00:21:37.297 "params": { 00:21:37.297 "process_window_size_kb": 1024, 00:21:37.297 "process_max_bandwidth_mb_sec": 0 00:21:37.297 } 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "method": "bdev_iscsi_set_options", 00:21:37.297 "params": { 00:21:37.297 "timeout_sec": 30 00:21:37.297 } 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "method": "bdev_nvme_set_options", 00:21:37.297 "params": { 00:21:37.297 "action_on_timeout": "none", 00:21:37.297 "timeout_us": 0, 00:21:37.297 "timeout_admin_us": 0, 00:21:37.297 "keep_alive_timeout_ms": 10000, 00:21:37.297 "arbitration_burst": 0, 00:21:37.297 "low_priority_weight": 0, 00:21:37.297 "medium_priority_weight": 0, 00:21:37.297 "high_priority_weight": 0, 00:21:37.297 "nvme_adminq_poll_period_us": 10000, 00:21:37.297 "nvme_ioq_poll_period_us": 0, 00:21:37.297 "io_queue_requests": 512, 00:21:37.297 "delay_cmd_submit": true, 00:21:37.297 "transport_retry_count": 4, 00:21:37.297 "bdev_retry_count": 3, 00:21:37.297 "transport_ack_timeout": 0, 00:21:37.297 "ctrlr_loss_timeout_sec": 0, 00:21:37.297 "reconnect_delay_sec": 0, 00:21:37.297 "fast_io_fail_timeout_sec": 0, 00:21:37.297 "disable_auto_failback": false, 00:21:37.297 "generate_uuids": false, 00:21:37.297 "transport_tos": 0, 00:21:37.297 
"nvme_error_stat": false, 00:21:37.297 "rdma_srq_size": 0, 00:21:37.297 "io_path_stat": false, 00:21:37.297 "allow_accel_sequence": false, 00:21:37.297 "rdma_max_cq_size": 0, 00:21:37.297 "rdma_cm_event_timeout_ms": 0, 00:21:37.297 "dhchap_digests": [ 00:21:37.297 "sha256", 00:21:37.297 "sha384", 00:21:37.297 "sha512" 00:21:37.297 ], 00:21:37.297 "dhchap_dhgroups": [ 00:21:37.297 "null", 00:21:37.297 "ffdhe2048", 00:21:37.297 "ffdhe3072", 00:21:37.297 "ffdhe4096", 00:21:37.297 "ffdhe6144", 00:21:37.297 "ffdhe8192" 00:21:37.297 ] 00:21:37.297 } 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "method": "bdev_nvme_attach_controller", 00:21:37.297 "params": { 00:21:37.297 "name": "TLSTEST", 00:21:37.297 "trtype": "TCP", 00:21:37.297 "adrfam": "IPv4", 00:21:37.297 "traddr": "10.0.0.2", 00:21:37.297 "trsvcid": "4420", 00:21:37.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.297 "prchk_reftag": false, 00:21:37.297 "prchk_guard": false, 00:21:37.297 "ctrlr_loss_timeout_sec": 0, 00:21:37.297 "reconnect_delay_sec": 0, 00:21:37.297 "fast_io_fail_timeout_sec": 0, 00:21:37.297 "psk": "key0", 00:21:37.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.297 "hdgst": false, 00:21:37.297 "ddgst": false, 00:21:37.297 "multipath": "multipath" 00:21:37.297 } 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "method": "bdev_nvme_set_hotplug", 00:21:37.297 "params": { 00:21:37.297 "period_us": 100000, 00:21:37.297 "enable": false 00:21:37.297 } 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "method": "bdev_wait_for_examine" 00:21:37.297 } 00:21:37.297 ] 00:21:37.297 }, 00:21:37.297 { 00:21:37.297 "subsystem": "nbd", 00:21:37.297 "config": [] 00:21:37.297 } 00:21:37.297 ] 00:21:37.297 }' 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2120730 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2120730 ']' 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2120730 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2120730 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2120730' 00:21:37.297 killing process with pid 2120730 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2120730 00:21:37.297 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.297 00:21:37.297 Latency(us) 00:21:37.297 [2024-12-05T20:14:38.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.297 [2024-12-05T20:14:38.734Z] =================================================================================================================== 00:21:37.297 [2024-12-05T20:14:38.734Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.297 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2120730 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2120364 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2120364 ']' 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2120364 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2120364 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2120364' 00:21:37.558 killing process with pid 2120364 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2120364 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2120364 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.558 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:37.558 "subsystems": [ 00:21:37.558 { 00:21:37.558 "subsystem": "keyring", 00:21:37.558 "config": [ 00:21:37.558 { 00:21:37.558 "method": "keyring_file_add_key", 00:21:37.558 "params": { 00:21:37.558 "name": "key0", 00:21:37.558 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:37.558 } 00:21:37.558 } 00:21:37.558 ] 00:21:37.558 }, 00:21:37.558 { 00:21:37.558 "subsystem": "iobuf", 00:21:37.558 "config": [ 00:21:37.558 { 00:21:37.558 "method": "iobuf_set_options", 00:21:37.558 "params": { 00:21:37.558 "small_pool_count": 8192, 00:21:37.558 "large_pool_count": 1024, 00:21:37.558 "small_bufsize": 8192, 00:21:37.558 "large_bufsize": 135168, 
00:21:37.558 "enable_numa": false 00:21:37.558 } 00:21:37.558 } 00:21:37.558 ] 00:21:37.558 }, 00:21:37.558 { 00:21:37.558 "subsystem": "sock", 00:21:37.558 "config": [ 00:21:37.558 { 00:21:37.558 "method": "sock_set_default_impl", 00:21:37.558 "params": { 00:21:37.558 "impl_name": "posix" 00:21:37.558 } 00:21:37.558 }, 00:21:37.558 { 00:21:37.558 "method": "sock_impl_set_options", 00:21:37.558 "params": { 00:21:37.558 "impl_name": "ssl", 00:21:37.558 "recv_buf_size": 4096, 00:21:37.558 "send_buf_size": 4096, 00:21:37.558 "enable_recv_pipe": true, 00:21:37.558 "enable_quickack": false, 00:21:37.558 "enable_placement_id": 0, 00:21:37.558 "enable_zerocopy_send_server": true, 00:21:37.558 "enable_zerocopy_send_client": false, 00:21:37.558 "zerocopy_threshold": 0, 00:21:37.558 "tls_version": 0, 00:21:37.558 "enable_ktls": false 00:21:37.558 } 00:21:37.558 }, 00:21:37.558 { 00:21:37.558 "method": "sock_impl_set_options", 00:21:37.558 "params": { 00:21:37.558 "impl_name": "posix", 00:21:37.558 "recv_buf_size": 2097152, 00:21:37.559 "send_buf_size": 2097152, 00:21:37.559 "enable_recv_pipe": true, 00:21:37.559 "enable_quickack": false, 00:21:37.559 "enable_placement_id": 0, 00:21:37.559 "enable_zerocopy_send_server": true, 00:21:37.559 "enable_zerocopy_send_client": false, 00:21:37.559 "zerocopy_threshold": 0, 00:21:37.559 "tls_version": 0, 00:21:37.559 "enable_ktls": false 00:21:37.559 } 00:21:37.559 } 00:21:37.559 ] 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "subsystem": "vmd", 00:21:37.559 "config": [] 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "subsystem": "accel", 00:21:37.559 "config": [ 00:21:37.559 { 00:21:37.559 "method": "accel_set_options", 00:21:37.559 "params": { 00:21:37.559 "small_cache_size": 128, 00:21:37.559 "large_cache_size": 16, 00:21:37.559 "task_count": 2048, 00:21:37.559 "sequence_count": 2048, 00:21:37.559 "buf_count": 2048 00:21:37.559 } 00:21:37.559 } 00:21:37.559 ] 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "subsystem": "bdev", 00:21:37.559 
"config": [ 00:21:37.559 { 00:21:37.559 "method": "bdev_set_options", 00:21:37.559 "params": { 00:21:37.559 "bdev_io_pool_size": 65535, 00:21:37.559 "bdev_io_cache_size": 256, 00:21:37.559 "bdev_auto_examine": true, 00:21:37.559 "iobuf_small_cache_size": 128, 00:21:37.559 "iobuf_large_cache_size": 16 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "bdev_raid_set_options", 00:21:37.559 "params": { 00:21:37.559 "process_window_size_kb": 1024, 00:21:37.559 "process_max_bandwidth_mb_sec": 0 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "bdev_iscsi_set_options", 00:21:37.559 "params": { 00:21:37.559 "timeout_sec": 30 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "bdev_nvme_set_options", 00:21:37.559 "params": { 00:21:37.559 "action_on_timeout": "none", 00:21:37.559 "timeout_us": 0, 00:21:37.559 "timeout_admin_us": 0, 00:21:37.559 "keep_alive_timeout_ms": 10000, 00:21:37.559 "arbitration_burst": 0, 00:21:37.559 "low_priority_weight": 0, 00:21:37.559 "medium_priority_weight": 0, 00:21:37.559 "high_priority_weight": 0, 00:21:37.559 "nvme_adminq_poll_period_us": 10000, 00:21:37.559 "nvme_ioq_poll_period_us": 0, 00:21:37.559 "io_queue_requests": 0, 00:21:37.559 "delay_cmd_submit": true, 00:21:37.559 "transport_retry_count": 4, 00:21:37.559 "bdev_retry_count": 3, 00:21:37.559 "transport_ack_timeout": 0, 00:21:37.559 "ctrlr_loss_timeout_sec": 0, 00:21:37.559 "reconnect_delay_sec": 0, 00:21:37.559 "fast_io_fail_timeout_sec": 0, 00:21:37.559 "disable_auto_failback": false, 00:21:37.559 "generate_uuids": false, 00:21:37.559 "transport_tos": 0, 00:21:37.559 "nvme_error_stat": false, 00:21:37.559 "rdma_srq_size": 0, 00:21:37.559 "io_path_stat": false, 00:21:37.559 "allow_accel_sequence": false, 00:21:37.559 "rdma_max_cq_size": 0, 00:21:37.559 "rdma_cm_event_timeout_ms": 0, 00:21:37.559 "dhchap_digests": [ 00:21:37.559 "sha256", 00:21:37.559 "sha384", 00:21:37.559 "sha512" 00:21:37.559 ], 00:21:37.559 
"dhchap_dhgroups": [ 00:21:37.559 "null", 00:21:37.559 "ffdhe2048", 00:21:37.559 "ffdhe3072", 00:21:37.559 "ffdhe4096", 00:21:37.559 "ffdhe6144", 00:21:37.559 "ffdhe8192" 00:21:37.559 ] 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "bdev_nvme_set_hotplug", 00:21:37.559 "params": { 00:21:37.559 "period_us": 100000, 00:21:37.559 "enable": false 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "bdev_malloc_create", 00:21:37.559 "params": { 00:21:37.559 "name": "malloc0", 00:21:37.559 "num_blocks": 8192, 00:21:37.559 "block_size": 4096, 00:21:37.559 "physical_block_size": 4096, 00:21:37.559 "uuid": "e53833f1-5aef-427f-9175-03051cdac5ca", 00:21:37.559 "optimal_io_boundary": 0, 00:21:37.559 "md_size": 0, 00:21:37.559 "dif_type": 0, 00:21:37.559 "dif_is_head_of_md": false, 00:21:37.559 "dif_pi_format": 0 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "bdev_wait_for_examine" 00:21:37.559 } 00:21:37.559 ] 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "subsystem": "nbd", 00:21:37.559 "config": [] 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "subsystem": "scheduler", 00:21:37.559 "config": [ 00:21:37.559 { 00:21:37.559 "method": "framework_set_scheduler", 00:21:37.559 "params": { 00:21:37.559 "name": "static" 00:21:37.559 } 00:21:37.559 } 00:21:37.559 ] 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "subsystem": "nvmf", 00:21:37.559 "config": [ 00:21:37.559 { 00:21:37.559 "method": "nvmf_set_config", 00:21:37.559 "params": { 00:21:37.559 "discovery_filter": "match_any", 00:21:37.559 "admin_cmd_passthru": { 00:21:37.559 "identify_ctrlr": false 00:21:37.559 }, 00:21:37.559 "dhchap_digests": [ 00:21:37.559 "sha256", 00:21:37.559 "sha384", 00:21:37.559 "sha512" 00:21:37.559 ], 00:21:37.559 "dhchap_dhgroups": [ 00:21:37.559 "null", 00:21:37.559 "ffdhe2048", 00:21:37.559 "ffdhe3072", 00:21:37.559 "ffdhe4096", 00:21:37.559 "ffdhe6144", 00:21:37.559 "ffdhe8192" 00:21:37.559 ] 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 
00:21:37.559 "method": "nvmf_set_max_subsystems", 00:21:37.559 "params": { 00:21:37.559 "max_subsystems": 1024 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "nvmf_set_crdt", 00:21:37.559 "params": { 00:21:37.559 "crdt1": 0, 00:21:37.559 "crdt2": 0, 00:21:37.559 "crdt3": 0 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "nvmf_create_transport", 00:21:37.559 "params": { 00:21:37.559 "trtype": "TCP", 00:21:37.559 "max_queue_depth": 128, 00:21:37.559 "max_io_qpairs_per_ctrlr": 127, 00:21:37.559 "in_capsule_data_size": 4096, 00:21:37.559 "max_io_size": 131072, 00:21:37.559 "io_unit_size": 131072, 00:21:37.559 "max_aq_depth": 128, 00:21:37.559 "num_shared_buffers": 511, 00:21:37.559 "buf_cache_size": 4294967295, 00:21:37.559 "dif_insert_or_strip": false, 00:21:37.559 "zcopy": false, 00:21:37.559 "c2h_success": false, 00:21:37.559 "sock_priority": 0, 00:21:37.559 "abort_timeout_sec": 1, 00:21:37.559 "ack_timeout": 0, 00:21:37.559 "data_wr_pool_size": 0 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "nvmf_create_subsystem", 00:21:37.559 "params": { 00:21:37.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.559 "allow_any_host": false, 00:21:37.559 "serial_number": "SPDK00000000000001", 00:21:37.559 "model_number": "SPDK bdev Controller", 00:21:37.559 "max_namespaces": 10, 00:21:37.559 "min_cntlid": 1, 00:21:37.559 "max_cntlid": 65519, 00:21:37.559 "ana_reporting": false 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "nvmf_subsystem_add_host", 00:21:37.559 "params": { 00:21:37.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.559 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.559 "psk": "key0" 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "nvmf_subsystem_add_ns", 00:21:37.559 "params": { 00:21:37.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.559 "namespace": { 00:21:37.559 "nsid": 1, 00:21:37.559 "bdev_name": "malloc0", 00:21:37.559 "nguid": 
"E53833F15AEF427F917503051CDAC5CA", 00:21:37.559 "uuid": "e53833f1-5aef-427f-9175-03051cdac5ca", 00:21:37.559 "no_auto_visible": false 00:21:37.559 } 00:21:37.559 } 00:21:37.559 }, 00:21:37.559 { 00:21:37.559 "method": "nvmf_subsystem_add_listener", 00:21:37.559 "params": { 00:21:37.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.559 "listen_address": { 00:21:37.559 "trtype": "TCP", 00:21:37.559 "adrfam": "IPv4", 00:21:37.560 "traddr": "10.0.0.2", 00:21:37.560 "trsvcid": "4420" 00:21:37.560 }, 00:21:37.560 "secure_channel": true 00:21:37.560 } 00:21:37.560 } 00:21:37.560 ] 00:21:37.560 } 00:21:37.560 ] 00:21:37.560 }' 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2121079 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2121079 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2121079 ']' 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.560 21:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.820 [2024-12-05 21:14:39.032451] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:37.820 [2024-12-05 21:14:39.032508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.820 [2024-12-05 21:14:39.129031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.820 [2024-12-05 21:14:39.157419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.820 [2024-12-05 21:14:39.157448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.820 [2024-12-05 21:14:39.157454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.820 [2024-12-05 21:14:39.157459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.820 [2024-12-05 21:14:39.157463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
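An aside on the config dump above: the `bdev_malloc_create` call passes a `"uuid"` and the later `nvmf_subsystem_add_ns` call passes both that `"uuid"` and an `"nguid"`. In this log the NGUID is simply the UUID with hyphens stripped and hex digits uppercased. A minimal sketch of that observed relationship (the field values are taken from the dump; the helper name is ours, not an SPDK API):

```python
# Sketch, assuming only what the config dump above shows: the namespace
# NGUID equals the bdev UUID with hyphens removed and letters uppercased.
def uuid_to_nguid(uuid_str: str) -> str:
    """Strip hyphens and uppercase, matching the nguid seen in this log."""
    return uuid_str.replace("-", "").upper()

# Values from the bdev_malloc_create / nvmf_subsystem_add_ns params above.
uuid = "e53833f1-5aef-427f-9175-03051cdac5ca"
print(uuid_to_nguid(uuid))  # E53833F15AEF427F917503051CDAC5CA
```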
00:21:37.820 [2024-12-05 21:14:39.157953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.081 [2024-12-05 21:14:39.352260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.081 [2024-12-05 21:14:39.384292] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.081 [2024-12-05 21:14:39.384493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2121370 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2121370 /var/tmp/bdevperf.sock 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2121370 ']' 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:38.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.650 21:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:38.650 "subsystems": [ 00:21:38.650 { 00:21:38.650 "subsystem": "keyring", 00:21:38.650 "config": [ 00:21:38.650 { 00:21:38.650 "method": "keyring_file_add_key", 00:21:38.650 "params": { 00:21:38.650 "name": "key0", 00:21:38.650 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:38.650 } 00:21:38.650 } 00:21:38.650 ] 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "subsystem": "iobuf", 00:21:38.650 "config": [ 00:21:38.650 { 00:21:38.650 "method": "iobuf_set_options", 00:21:38.650 "params": { 00:21:38.650 "small_pool_count": 8192, 00:21:38.650 "large_pool_count": 1024, 00:21:38.650 "small_bufsize": 8192, 00:21:38.650 "large_bufsize": 135168, 00:21:38.650 "enable_numa": false 00:21:38.650 } 00:21:38.650 } 00:21:38.650 ] 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "subsystem": "sock", 00:21:38.650 "config": [ 00:21:38.650 { 00:21:38.650 "method": "sock_set_default_impl", 00:21:38.650 "params": { 00:21:38.650 "impl_name": "posix" 00:21:38.650 } 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "method": "sock_impl_set_options", 00:21:38.650 "params": { 00:21:38.650 "impl_name": "ssl", 00:21:38.650 "recv_buf_size": 4096, 00:21:38.650 "send_buf_size": 4096, 00:21:38.650 "enable_recv_pipe": true, 00:21:38.650 "enable_quickack": false, 00:21:38.650 "enable_placement_id": 0, 00:21:38.650 "enable_zerocopy_send_server": true, 00:21:38.650 
"enable_zerocopy_send_client": false, 00:21:38.650 "zerocopy_threshold": 0, 00:21:38.650 "tls_version": 0, 00:21:38.650 "enable_ktls": false 00:21:38.650 } 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "method": "sock_impl_set_options", 00:21:38.650 "params": { 00:21:38.650 "impl_name": "posix", 00:21:38.650 "recv_buf_size": 2097152, 00:21:38.650 "send_buf_size": 2097152, 00:21:38.650 "enable_recv_pipe": true, 00:21:38.650 "enable_quickack": false, 00:21:38.650 "enable_placement_id": 0, 00:21:38.650 "enable_zerocopy_send_server": true, 00:21:38.650 "enable_zerocopy_send_client": false, 00:21:38.650 "zerocopy_threshold": 0, 00:21:38.650 "tls_version": 0, 00:21:38.650 "enable_ktls": false 00:21:38.650 } 00:21:38.650 } 00:21:38.650 ] 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "subsystem": "vmd", 00:21:38.650 "config": [] 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "subsystem": "accel", 00:21:38.650 "config": [ 00:21:38.650 { 00:21:38.650 "method": "accel_set_options", 00:21:38.650 "params": { 00:21:38.650 "small_cache_size": 128, 00:21:38.650 "large_cache_size": 16, 00:21:38.650 "task_count": 2048, 00:21:38.650 "sequence_count": 2048, 00:21:38.650 "buf_count": 2048 00:21:38.650 } 00:21:38.650 } 00:21:38.650 ] 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "subsystem": "bdev", 00:21:38.650 "config": [ 00:21:38.650 { 00:21:38.650 "method": "bdev_set_options", 00:21:38.650 "params": { 00:21:38.650 "bdev_io_pool_size": 65535, 00:21:38.650 "bdev_io_cache_size": 256, 00:21:38.650 "bdev_auto_examine": true, 00:21:38.650 "iobuf_small_cache_size": 128, 00:21:38.650 "iobuf_large_cache_size": 16 00:21:38.650 } 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "method": "bdev_raid_set_options", 00:21:38.650 "params": { 00:21:38.650 "process_window_size_kb": 1024, 00:21:38.650 "process_max_bandwidth_mb_sec": 0 00:21:38.650 } 00:21:38.650 }, 00:21:38.650 { 00:21:38.650 "method": "bdev_iscsi_set_options", 00:21:38.650 "params": { 00:21:38.650 "timeout_sec": 30 00:21:38.650 } 00:21:38.650 }, 
00:21:38.650 { 00:21:38.650 "method": "bdev_nvme_set_options", 00:21:38.650 "params": { 00:21:38.650 "action_on_timeout": "none", 00:21:38.650 "timeout_us": 0, 00:21:38.650 "timeout_admin_us": 0, 00:21:38.650 "keep_alive_timeout_ms": 10000, 00:21:38.650 "arbitration_burst": 0, 00:21:38.650 "low_priority_weight": 0, 00:21:38.650 "medium_priority_weight": 0, 00:21:38.650 "high_priority_weight": 0, 00:21:38.650 "nvme_adminq_poll_period_us": 10000, 00:21:38.650 "nvme_ioq_poll_period_us": 0, 00:21:38.650 "io_queue_requests": 512, 00:21:38.650 "delay_cmd_submit": true, 00:21:38.650 "transport_retry_count": 4, 00:21:38.650 "bdev_retry_count": 3, 00:21:38.650 "transport_ack_timeout": 0, 00:21:38.650 "ctrlr_loss_timeout_sec": 0, 00:21:38.650 "reconnect_delay_sec": 0, 00:21:38.650 "fast_io_fail_timeout_sec": 0, 00:21:38.650 "disable_auto_failback": false, 00:21:38.651 "generate_uuids": false, 00:21:38.651 "transport_tos": 0, 00:21:38.651 "nvme_error_stat": false, 00:21:38.651 "rdma_srq_size": 0, 00:21:38.651 "io_path_stat": false, 00:21:38.651 "allow_accel_sequence": false, 00:21:38.651 "rdma_max_cq_size": 0, 00:21:38.651 "rdma_cm_event_timeout_ms": 0, 00:21:38.651 "dhchap_digests": [ 00:21:38.651 "sha256", 00:21:38.651 "sha384", 00:21:38.651 "sha512" 00:21:38.651 ], 00:21:38.651 "dhchap_dhgroups": [ 00:21:38.651 "null", 00:21:38.651 "ffdhe2048", 00:21:38.651 "ffdhe3072", 00:21:38.651 "ffdhe4096", 00:21:38.651 "ffdhe6144", 00:21:38.651 "ffdhe8192" 00:21:38.651 ] 00:21:38.651 } 00:21:38.651 }, 00:21:38.651 { 00:21:38.651 "method": "bdev_nvme_attach_controller", 00:21:38.651 "params": { 00:21:38.651 "name": "TLSTEST", 00:21:38.651 "trtype": "TCP", 00:21:38.651 "adrfam": "IPv4", 00:21:38.651 "traddr": "10.0.0.2", 00:21:38.651 "trsvcid": "4420", 00:21:38.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.651 "prchk_reftag": false, 00:21:38.651 "prchk_guard": false, 00:21:38.651 "ctrlr_loss_timeout_sec": 0, 00:21:38.651 "reconnect_delay_sec": 0, 00:21:38.651 
"fast_io_fail_timeout_sec": 0, 00:21:38.651 "psk": "key0", 00:21:38.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.651 "hdgst": false, 00:21:38.651 "ddgst": false, 00:21:38.651 "multipath": "multipath" 00:21:38.651 } 00:21:38.651 }, 00:21:38.651 { 00:21:38.651 "method": "bdev_nvme_set_hotplug", 00:21:38.651 "params": { 00:21:38.651 "period_us": 100000, 00:21:38.651 "enable": false 00:21:38.651 } 00:21:38.651 }, 00:21:38.651 { 00:21:38.651 "method": "bdev_wait_for_examine" 00:21:38.651 } 00:21:38.651 ] 00:21:38.651 }, 00:21:38.651 { 00:21:38.651 "subsystem": "nbd", 00:21:38.651 "config": [] 00:21:38.651 } 00:21:38.651 ] 00:21:38.651 }' 00:21:38.651 [2024-12-05 21:14:39.907714] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:38.651 [2024-12-05 21:14:39.907770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121370 ] 00:21:38.651 [2024-12-05 21:14:39.970931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.651 [2024-12-05 21:14:39.999945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.910 [2024-12-05 21:14:40.139308] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.479 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.479 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:39.479 21:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:39.479 Running I/O for 10 seconds... 
00:21:41.362 5801.00 IOPS, 22.66 MiB/s [2024-12-05T20:14:44.185Z] 5340.00 IOPS, 20.86 MiB/s [2024-12-05T20:14:45.126Z] 5223.33 IOPS, 20.40 MiB/s [2024-12-05T20:14:46.072Z] 5357.50 IOPS, 20.93 MiB/s [2024-12-05T20:14:47.014Z] 5566.60 IOPS, 21.74 MiB/s [2024-12-05T20:14:47.956Z] 5475.83 IOPS, 21.39 MiB/s [2024-12-05T20:14:48.898Z] 5474.86 IOPS, 21.39 MiB/s [2024-12-05T20:14:49.839Z] 5548.62 IOPS, 21.67 MiB/s [2024-12-05T20:14:51.224Z] 5647.89 IOPS, 22.06 MiB/s [2024-12-05T20:14:51.224Z] 5534.30 IOPS, 21.62 MiB/s 00:21:49.787 Latency(us) 00:21:49.787 [2024-12-05T20:14:51.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.787 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:49.787 Verification LBA range: start 0x0 length 0x2000 00:21:49.787 TLSTESTn1 : 10.02 5537.93 21.63 0.00 0.00 23080.12 4696.75 23265.28 00:21:49.787 [2024-12-05T20:14:51.224Z] =================================================================================================================== 00:21:49.787 [2024-12-05T20:14:51.224Z] Total : 5537.93 21.63 0.00 0.00 23080.12 4696.75 23265.28 00:21:49.787 { 00:21:49.787 "results": [ 00:21:49.787 { 00:21:49.787 "job": "TLSTESTn1", 00:21:49.787 "core_mask": "0x4", 00:21:49.787 "workload": "verify", 00:21:49.787 "status": "finished", 00:21:49.787 "verify_range": { 00:21:49.787 "start": 0, 00:21:49.788 "length": 8192 00:21:49.788 }, 00:21:49.788 "queue_depth": 128, 00:21:49.788 "io_size": 4096, 00:21:49.788 "runtime": 10.016203, 00:21:49.788 "iops": 5537.926897048712, 00:21:49.788 "mibps": 21.63252694159653, 00:21:49.788 "io_failed": 0, 00:21:49.788 "io_timeout": 0, 00:21:49.788 "avg_latency_us": 23080.120773765524, 00:21:49.788 "min_latency_us": 4696.746666666667, 00:21:49.788 "max_latency_us": 23265.28 00:21:49.788 } 00:21:49.788 ], 00:21:49.788 "core_count": 1 00:21:49.788 } 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2121370 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2121370 ']' 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2121370 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121370 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121370' 00:21:49.788 killing process with pid 2121370 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2121370 00:21:49.788 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.788 00:21:49.788 Latency(us) 00:21:49.788 [2024-12-05T20:14:51.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.788 [2024-12-05T20:14:51.225Z] =================================================================================================================== 00:21:49.788 [2024-12-05T20:14:51.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.788 21:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2121370 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2121079 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 2121079 ']' 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2121079 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2121079 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2121079' 00:21:49.788 killing process with pid 2121079 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2121079 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2121079 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2123453 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2123453 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:49.788 21:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2123453 ']' 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.788 21:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.049 [2024-12-05 21:14:51.247450] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:50.049 [2024-12-05 21:14:51.247509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.049 [2024-12-05 21:14:51.331731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.049 [2024-12-05 21:14:51.367382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.049 [2024-12-05 21:14:51.367417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.049 [2024-12-05 21:14:51.367425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.049 [2024-12-05 21:14:51.367432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:50.049 [2024-12-05 21:14:51.367438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.049 [2024-12-05 21:14:51.368031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.622 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.622 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:50.622 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.622 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.622 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.883 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.883 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.P9dZDKvZGJ 00:21:50.883 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.P9dZDKvZGJ 00:21:50.883 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:50.883 [2024-12-05 21:14:52.225963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.883 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:51.144 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:51.405 [2024-12-05 21:14:52.590858] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:51.405 [2024-12-05 21:14:52.591092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.405 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:51.405 malloc0 00:21:51.405 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:51.666 21:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:51.927 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.927 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2123955 00:21:51.927 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.927 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:51.927 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2123955 /var/tmp/bdevperf.sock 00:21:51.927 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2123955 ']' 00:21:51.928 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.928 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.928 
21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.928 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.928 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.190 [2024-12-05 21:14:53.373598] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:52.190 [2024-12-05 21:14:53.373642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123955 ] 00:21:52.190 [2024-12-05 21:14:53.457867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.190 [2024-12-05 21:14:53.487744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.190 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.190 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:52.190 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:52.451 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:52.451 [2024-12-05 21:14:53.863338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:52.711 nvme0n1 00:21:52.711 21:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.711 Running I/O for 1 seconds... 00:21:53.653 3489.00 IOPS, 13.63 MiB/s 00:21:53.653 Latency(us) 00:21:53.653 [2024-12-05T20:14:55.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.653 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:53.653 Verification LBA range: start 0x0 length 0x2000 00:21:53.653 nvme0n1 : 1.03 3503.18 13.68 0.00 0.00 36052.16 6116.69 38884.69 00:21:53.653 [2024-12-05T20:14:55.090Z] =================================================================================================================== 00:21:53.653 [2024-12-05T20:14:55.090Z] Total : 3503.18 13.68 0.00 0.00 36052.16 6116.69 38884.69 00:21:53.653 { 00:21:53.653 "results": [ 00:21:53.653 { 00:21:53.653 "job": "nvme0n1", 00:21:53.653 "core_mask": "0x2", 00:21:53.653 "workload": "verify", 00:21:53.653 "status": "finished", 00:21:53.653 "verify_range": { 00:21:53.653 "start": 0, 00:21:53.653 "length": 8192 00:21:53.653 }, 00:21:53.653 "queue_depth": 128, 00:21:53.653 "io_size": 4096, 00:21:53.653 "runtime": 1.032491, 00:21:53.653 "iops": 3503.178235936197, 00:21:53.653 "mibps": 13.684289984125769, 00:21:53.653 "io_failed": 0, 00:21:53.653 "io_timeout": 0, 00:21:53.653 "avg_latency_us": 36052.157021472674, 00:21:53.653 "min_latency_us": 6116.693333333334, 00:21:53.653 "max_latency_us": 38884.693333333336 00:21:53.654 } 00:21:53.654 ], 00:21:53.654 "core_count": 1 00:21:53.654 } 00:21:53.654 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2123955 00:21:53.654 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2123955 ']' 00:21:53.654 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2123955 00:21:53.654 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:53.654 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.654 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123955 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123955' 00:21:53.914 killing process with pid 2123955 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2123955 00:21:53.914 Received shutdown signal, test time was about 1.000000 seconds 00:21:53.914 00:21:53.914 Latency(us) 00:21:53.914 [2024-12-05T20:14:55.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.914 [2024-12-05T20:14:55.351Z] =================================================================================================================== 00:21:53.914 [2024-12-05T20:14:55.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2123955 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2123453 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2123453 ']' 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2123453 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123453 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123453' 00:21:53.914 killing process with pid 2123453 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2123453 00:21:53.914 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2123453 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2124485 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2124485 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2124485 ']' 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.174 21:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.174 [2024-12-05 21:14:55.504236] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:54.174 [2024-12-05 21:14:55.504297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.174 [2024-12-05 21:14:55.588781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.434 [2024-12-05 21:14:55.624925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.434 [2024-12-05 21:14:55.624959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.434 [2024-12-05 21:14:55.624968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.434 [2024-12-05 21:14:55.624976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.434 [2024-12-05 21:14:55.624983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:54.434 [2024-12-05 21:14:55.625581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.005 [2024-12-05 21:14:56.330990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.005 malloc0 00:21:55.005 [2024-12-05 21:14:56.357694] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.005 [2024-12-05 21:14:56.357921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2124516 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2124516 /var/tmp/bdevperf.sock 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2124516 ']' 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.005 21:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.005 [2024-12-05 21:14:56.437006] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:55.005 [2024-12-05 21:14:56.437054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124516 ] 00:21:55.266 [2024-12-05 21:14:56.526912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.266 [2024-12-05 21:14:56.557179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.837 21:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.837 21:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:55.837 21:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.P9dZDKvZGJ 00:21:56.098 21:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:56.359 [2024-12-05 21:14:57.558387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.359 nvme0n1 00:21:56.359 21:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.359 Running I/O for 1 seconds... 
00:21:57.563 4725.00 IOPS, 18.46 MiB/s 00:21:57.563 Latency(us) 00:21:57.563 [2024-12-05T20:14:59.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.563 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:57.563 Verification LBA range: start 0x0 length 0x2000 00:21:57.563 nvme0n1 : 1.02 4762.05 18.60 0.00 0.00 26634.27 6225.92 31457.28 00:21:57.563 [2024-12-05T20:14:59.000Z] =================================================================================================================== 00:21:57.563 [2024-12-05T20:14:59.000Z] Total : 4762.05 18.60 0.00 0.00 26634.27 6225.92 31457.28 00:21:57.563 { 00:21:57.563 "results": [ 00:21:57.563 { 00:21:57.563 "job": "nvme0n1", 00:21:57.563 "core_mask": "0x2", 00:21:57.563 "workload": "verify", 00:21:57.563 "status": "finished", 00:21:57.563 "verify_range": { 00:21:57.563 "start": 0, 00:21:57.563 "length": 8192 00:21:57.563 }, 00:21:57.563 "queue_depth": 128, 00:21:57.563 "io_size": 4096, 00:21:57.563 "runtime": 1.019098, 00:21:57.563 "iops": 4762.054287222622, 00:21:57.563 "mibps": 18.60177455946337, 00:21:57.563 "io_failed": 0, 00:21:57.563 "io_timeout": 0, 00:21:57.563 "avg_latency_us": 26634.26602376537, 00:21:57.563 "min_latency_us": 6225.92, 00:21:57.563 "max_latency_us": 31457.28 00:21:57.563 } 00:21:57.563 ], 00:21:57.563 "core_count": 1 00:21:57.563 } 00:21:57.563 21:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:57.563 21:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.563 21:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.563 21:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.563 21:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:57.563 "subsystems": [ 00:21:57.563 { 00:21:57.563 "subsystem": "keyring", 
00:21:57.563 "config": [ 00:21:57.563 { 00:21:57.563 "method": "keyring_file_add_key", 00:21:57.563 "params": { 00:21:57.563 "name": "key0", 00:21:57.563 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:57.563 } 00:21:57.563 } 00:21:57.563 ] 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "subsystem": "iobuf", 00:21:57.563 "config": [ 00:21:57.563 { 00:21:57.563 "method": "iobuf_set_options", 00:21:57.563 "params": { 00:21:57.563 "small_pool_count": 8192, 00:21:57.563 "large_pool_count": 1024, 00:21:57.563 "small_bufsize": 8192, 00:21:57.563 "large_bufsize": 135168, 00:21:57.563 "enable_numa": false 00:21:57.563 } 00:21:57.563 } 00:21:57.563 ] 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "subsystem": "sock", 00:21:57.563 "config": [ 00:21:57.563 { 00:21:57.563 "method": "sock_set_default_impl", 00:21:57.563 "params": { 00:21:57.563 "impl_name": "posix" 00:21:57.563 } 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "method": "sock_impl_set_options", 00:21:57.563 "params": { 00:21:57.563 "impl_name": "ssl", 00:21:57.563 "recv_buf_size": 4096, 00:21:57.563 "send_buf_size": 4096, 00:21:57.563 "enable_recv_pipe": true, 00:21:57.563 "enable_quickack": false, 00:21:57.563 "enable_placement_id": 0, 00:21:57.563 "enable_zerocopy_send_server": true, 00:21:57.563 "enable_zerocopy_send_client": false, 00:21:57.563 "zerocopy_threshold": 0, 00:21:57.563 "tls_version": 0, 00:21:57.563 "enable_ktls": false 00:21:57.563 } 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "method": "sock_impl_set_options", 00:21:57.563 "params": { 00:21:57.563 "impl_name": "posix", 00:21:57.563 "recv_buf_size": 2097152, 00:21:57.563 "send_buf_size": 2097152, 00:21:57.563 "enable_recv_pipe": true, 00:21:57.563 "enable_quickack": false, 00:21:57.563 "enable_placement_id": 0, 00:21:57.563 "enable_zerocopy_send_server": true, 00:21:57.563 "enable_zerocopy_send_client": false, 00:21:57.563 "zerocopy_threshold": 0, 00:21:57.563 "tls_version": 0, 00:21:57.563 "enable_ktls": false 00:21:57.563 } 00:21:57.563 } 00:21:57.563 ] 
00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "subsystem": "vmd", 00:21:57.563 "config": [] 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "subsystem": "accel", 00:21:57.563 "config": [ 00:21:57.563 { 00:21:57.563 "method": "accel_set_options", 00:21:57.563 "params": { 00:21:57.563 "small_cache_size": 128, 00:21:57.563 "large_cache_size": 16, 00:21:57.563 "task_count": 2048, 00:21:57.563 "sequence_count": 2048, 00:21:57.563 "buf_count": 2048 00:21:57.563 } 00:21:57.563 } 00:21:57.563 ] 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "subsystem": "bdev", 00:21:57.563 "config": [ 00:21:57.563 { 00:21:57.563 "method": "bdev_set_options", 00:21:57.563 "params": { 00:21:57.563 "bdev_io_pool_size": 65535, 00:21:57.563 "bdev_io_cache_size": 256, 00:21:57.563 "bdev_auto_examine": true, 00:21:57.563 "iobuf_small_cache_size": 128, 00:21:57.563 "iobuf_large_cache_size": 16 00:21:57.563 } 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "method": "bdev_raid_set_options", 00:21:57.563 "params": { 00:21:57.563 "process_window_size_kb": 1024, 00:21:57.563 "process_max_bandwidth_mb_sec": 0 00:21:57.563 } 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "method": "bdev_iscsi_set_options", 00:21:57.563 "params": { 00:21:57.563 "timeout_sec": 30 00:21:57.563 } 00:21:57.563 }, 00:21:57.563 { 00:21:57.563 "method": "bdev_nvme_set_options", 00:21:57.563 "params": { 00:21:57.563 "action_on_timeout": "none", 00:21:57.563 "timeout_us": 0, 00:21:57.563 "timeout_admin_us": 0, 00:21:57.563 "keep_alive_timeout_ms": 10000, 00:21:57.563 "arbitration_burst": 0, 00:21:57.563 "low_priority_weight": 0, 00:21:57.563 "medium_priority_weight": 0, 00:21:57.563 "high_priority_weight": 0, 00:21:57.563 "nvme_adminq_poll_period_us": 10000, 00:21:57.563 "nvme_ioq_poll_period_us": 0, 00:21:57.563 "io_queue_requests": 0, 00:21:57.563 "delay_cmd_submit": true, 00:21:57.563 "transport_retry_count": 4, 00:21:57.563 "bdev_retry_count": 3, 00:21:57.563 "transport_ack_timeout": 0, 00:21:57.563 "ctrlr_loss_timeout_sec": 0, 00:21:57.563 
"reconnect_delay_sec": 0, 00:21:57.563 "fast_io_fail_timeout_sec": 0, 00:21:57.563 "disable_auto_failback": false, 00:21:57.563 "generate_uuids": false, 00:21:57.563 "transport_tos": 0, 00:21:57.563 "nvme_error_stat": false, 00:21:57.563 "rdma_srq_size": 0, 00:21:57.563 "io_path_stat": false, 00:21:57.564 "allow_accel_sequence": false, 00:21:57.564 "rdma_max_cq_size": 0, 00:21:57.564 "rdma_cm_event_timeout_ms": 0, 00:21:57.564 "dhchap_digests": [ 00:21:57.564 "sha256", 00:21:57.564 "sha384", 00:21:57.564 "sha512" 00:21:57.564 ], 00:21:57.564 "dhchap_dhgroups": [ 00:21:57.564 "null", 00:21:57.564 "ffdhe2048", 00:21:57.564 "ffdhe3072", 00:21:57.564 "ffdhe4096", 00:21:57.564 "ffdhe6144", 00:21:57.564 "ffdhe8192" 00:21:57.564 ] 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "bdev_nvme_set_hotplug", 00:21:57.564 "params": { 00:21:57.564 "period_us": 100000, 00:21:57.564 "enable": false 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "bdev_malloc_create", 00:21:57.564 "params": { 00:21:57.564 "name": "malloc0", 00:21:57.564 "num_blocks": 8192, 00:21:57.564 "block_size": 4096, 00:21:57.564 "physical_block_size": 4096, 00:21:57.564 "uuid": "22ca7d75-1f87-46d2-a284-45bc24d6f6b7", 00:21:57.564 "optimal_io_boundary": 0, 00:21:57.564 "md_size": 0, 00:21:57.564 "dif_type": 0, 00:21:57.564 "dif_is_head_of_md": false, 00:21:57.564 "dif_pi_format": 0 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "bdev_wait_for_examine" 00:21:57.564 } 00:21:57.564 ] 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "subsystem": "nbd", 00:21:57.564 "config": [] 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "subsystem": "scheduler", 00:21:57.564 "config": [ 00:21:57.564 { 00:21:57.564 "method": "framework_set_scheduler", 00:21:57.564 "params": { 00:21:57.564 "name": "static" 00:21:57.564 } 00:21:57.564 } 00:21:57.564 ] 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "subsystem": "nvmf", 00:21:57.564 "config": [ 00:21:57.564 { 00:21:57.564 
"method": "nvmf_set_config", 00:21:57.564 "params": { 00:21:57.564 "discovery_filter": "match_any", 00:21:57.564 "admin_cmd_passthru": { 00:21:57.564 "identify_ctrlr": false 00:21:57.564 }, 00:21:57.564 "dhchap_digests": [ 00:21:57.564 "sha256", 00:21:57.564 "sha384", 00:21:57.564 "sha512" 00:21:57.564 ], 00:21:57.564 "dhchap_dhgroups": [ 00:21:57.564 "null", 00:21:57.564 "ffdhe2048", 00:21:57.564 "ffdhe3072", 00:21:57.564 "ffdhe4096", 00:21:57.564 "ffdhe6144", 00:21:57.564 "ffdhe8192" 00:21:57.564 ] 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_set_max_subsystems", 00:21:57.564 "params": { 00:21:57.564 "max_subsystems": 1024 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_set_crdt", 00:21:57.564 "params": { 00:21:57.564 "crdt1": 0, 00:21:57.564 "crdt2": 0, 00:21:57.564 "crdt3": 0 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_create_transport", 00:21:57.564 "params": { 00:21:57.564 "trtype": "TCP", 00:21:57.564 "max_queue_depth": 128, 00:21:57.564 "max_io_qpairs_per_ctrlr": 127, 00:21:57.564 "in_capsule_data_size": 4096, 00:21:57.564 "max_io_size": 131072, 00:21:57.564 "io_unit_size": 131072, 00:21:57.564 "max_aq_depth": 128, 00:21:57.564 "num_shared_buffers": 511, 00:21:57.564 "buf_cache_size": 4294967295, 00:21:57.564 "dif_insert_or_strip": false, 00:21:57.564 "zcopy": false, 00:21:57.564 "c2h_success": false, 00:21:57.564 "sock_priority": 0, 00:21:57.564 "abort_timeout_sec": 1, 00:21:57.564 "ack_timeout": 0, 00:21:57.564 "data_wr_pool_size": 0 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_create_subsystem", 00:21:57.564 "params": { 00:21:57.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.564 "allow_any_host": false, 00:21:57.564 "serial_number": "00000000000000000000", 00:21:57.564 "model_number": "SPDK bdev Controller", 00:21:57.564 "max_namespaces": 32, 00:21:57.564 "min_cntlid": 1, 00:21:57.564 "max_cntlid": 65519, 00:21:57.564 "ana_reporting": 
false 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_subsystem_add_host", 00:21:57.564 "params": { 00:21:57.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.564 "host": "nqn.2016-06.io.spdk:host1", 00:21:57.564 "psk": "key0" 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_subsystem_add_ns", 00:21:57.564 "params": { 00:21:57.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.564 "namespace": { 00:21:57.564 "nsid": 1, 00:21:57.564 "bdev_name": "malloc0", 00:21:57.564 "nguid": "22CA7D751F8746D2A28445BC24D6F6B7", 00:21:57.564 "uuid": "22ca7d75-1f87-46d2-a284-45bc24d6f6b7", 00:21:57.564 "no_auto_visible": false 00:21:57.564 } 00:21:57.564 } 00:21:57.564 }, 00:21:57.564 { 00:21:57.564 "method": "nvmf_subsystem_add_listener", 00:21:57.564 "params": { 00:21:57.564 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.564 "listen_address": { 00:21:57.564 "trtype": "TCP", 00:21:57.564 "adrfam": "IPv4", 00:21:57.564 "traddr": "10.0.0.2", 00:21:57.564 "trsvcid": "4420" 00:21:57.564 }, 00:21:57.564 "secure_channel": false, 00:21:57.564 "sock_impl": "ssl" 00:21:57.564 } 00:21:57.564 } 00:21:57.564 ] 00:21:57.564 } 00:21:57.564 ] 00:21:57.564 }' 00:21:57.564 21:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:57.826 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:57.826 "subsystems": [ 00:21:57.826 { 00:21:57.826 "subsystem": "keyring", 00:21:57.826 "config": [ 00:21:57.826 { 00:21:57.826 "method": "keyring_file_add_key", 00:21:57.826 "params": { 00:21:57.826 "name": "key0", 00:21:57.826 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:57.826 } 00:21:57.826 } 00:21:57.826 ] 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "subsystem": "iobuf", 00:21:57.826 "config": [ 00:21:57.826 { 00:21:57.826 "method": "iobuf_set_options", 00:21:57.826 "params": { 00:21:57.826 "small_pool_count": 
8192, 00:21:57.826 "large_pool_count": 1024, 00:21:57.826 "small_bufsize": 8192, 00:21:57.826 "large_bufsize": 135168, 00:21:57.826 "enable_numa": false 00:21:57.826 } 00:21:57.826 } 00:21:57.826 ] 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "subsystem": "sock", 00:21:57.826 "config": [ 00:21:57.826 { 00:21:57.826 "method": "sock_set_default_impl", 00:21:57.826 "params": { 00:21:57.826 "impl_name": "posix" 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "sock_impl_set_options", 00:21:57.826 "params": { 00:21:57.826 "impl_name": "ssl", 00:21:57.826 "recv_buf_size": 4096, 00:21:57.826 "send_buf_size": 4096, 00:21:57.826 "enable_recv_pipe": true, 00:21:57.826 "enable_quickack": false, 00:21:57.826 "enable_placement_id": 0, 00:21:57.826 "enable_zerocopy_send_server": true, 00:21:57.826 "enable_zerocopy_send_client": false, 00:21:57.826 "zerocopy_threshold": 0, 00:21:57.826 "tls_version": 0, 00:21:57.826 "enable_ktls": false 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "sock_impl_set_options", 00:21:57.826 "params": { 00:21:57.826 "impl_name": "posix", 00:21:57.826 "recv_buf_size": 2097152, 00:21:57.826 "send_buf_size": 2097152, 00:21:57.826 "enable_recv_pipe": true, 00:21:57.826 "enable_quickack": false, 00:21:57.826 "enable_placement_id": 0, 00:21:57.826 "enable_zerocopy_send_server": true, 00:21:57.826 "enable_zerocopy_send_client": false, 00:21:57.826 "zerocopy_threshold": 0, 00:21:57.826 "tls_version": 0, 00:21:57.826 "enable_ktls": false 00:21:57.826 } 00:21:57.826 } 00:21:57.826 ] 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "subsystem": "vmd", 00:21:57.826 "config": [] 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "subsystem": "accel", 00:21:57.826 "config": [ 00:21:57.826 { 00:21:57.826 "method": "accel_set_options", 00:21:57.826 "params": { 00:21:57.826 "small_cache_size": 128, 00:21:57.826 "large_cache_size": 16, 00:21:57.826 "task_count": 2048, 00:21:57.826 "sequence_count": 2048, 00:21:57.826 "buf_count": 2048 
00:21:57.826 } 00:21:57.826 } 00:21:57.826 ] 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "subsystem": "bdev", 00:21:57.826 "config": [ 00:21:57.826 { 00:21:57.826 "method": "bdev_set_options", 00:21:57.826 "params": { 00:21:57.826 "bdev_io_pool_size": 65535, 00:21:57.826 "bdev_io_cache_size": 256, 00:21:57.826 "bdev_auto_examine": true, 00:21:57.826 "iobuf_small_cache_size": 128, 00:21:57.826 "iobuf_large_cache_size": 16 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_raid_set_options", 00:21:57.826 "params": { 00:21:57.826 "process_window_size_kb": 1024, 00:21:57.826 "process_max_bandwidth_mb_sec": 0 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_iscsi_set_options", 00:21:57.826 "params": { 00:21:57.826 "timeout_sec": 30 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_nvme_set_options", 00:21:57.826 "params": { 00:21:57.826 "action_on_timeout": "none", 00:21:57.826 "timeout_us": 0, 00:21:57.826 "timeout_admin_us": 0, 00:21:57.826 "keep_alive_timeout_ms": 10000, 00:21:57.826 "arbitration_burst": 0, 00:21:57.826 "low_priority_weight": 0, 00:21:57.826 "medium_priority_weight": 0, 00:21:57.826 "high_priority_weight": 0, 00:21:57.826 "nvme_adminq_poll_period_us": 10000, 00:21:57.826 "nvme_ioq_poll_period_us": 0, 00:21:57.826 "io_queue_requests": 512, 00:21:57.826 "delay_cmd_submit": true, 00:21:57.826 "transport_retry_count": 4, 00:21:57.826 "bdev_retry_count": 3, 00:21:57.826 "transport_ack_timeout": 0, 00:21:57.826 "ctrlr_loss_timeout_sec": 0, 00:21:57.826 "reconnect_delay_sec": 0, 00:21:57.826 "fast_io_fail_timeout_sec": 0, 00:21:57.826 "disable_auto_failback": false, 00:21:57.826 "generate_uuids": false, 00:21:57.826 "transport_tos": 0, 00:21:57.826 "nvme_error_stat": false, 00:21:57.826 "rdma_srq_size": 0, 00:21:57.826 "io_path_stat": false, 00:21:57.826 "allow_accel_sequence": false, 00:21:57.826 "rdma_max_cq_size": 0, 00:21:57.826 "rdma_cm_event_timeout_ms": 0, 00:21:57.826 
"dhchap_digests": [ 00:21:57.826 "sha256", 00:21:57.826 "sha384", 00:21:57.826 "sha512" 00:21:57.826 ], 00:21:57.826 "dhchap_dhgroups": [ 00:21:57.826 "null", 00:21:57.826 "ffdhe2048", 00:21:57.826 "ffdhe3072", 00:21:57.826 "ffdhe4096", 00:21:57.826 "ffdhe6144", 00:21:57.826 "ffdhe8192" 00:21:57.826 ] 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_nvme_attach_controller", 00:21:57.826 "params": { 00:21:57.826 "name": "nvme0", 00:21:57.826 "trtype": "TCP", 00:21:57.826 "adrfam": "IPv4", 00:21:57.826 "traddr": "10.0.0.2", 00:21:57.826 "trsvcid": "4420", 00:21:57.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.826 "prchk_reftag": false, 00:21:57.826 "prchk_guard": false, 00:21:57.826 "ctrlr_loss_timeout_sec": 0, 00:21:57.826 "reconnect_delay_sec": 0, 00:21:57.826 "fast_io_fail_timeout_sec": 0, 00:21:57.826 "psk": "key0", 00:21:57.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.826 "hdgst": false, 00:21:57.826 "ddgst": false, 00:21:57.826 "multipath": "multipath" 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_nvme_set_hotplug", 00:21:57.826 "params": { 00:21:57.826 "period_us": 100000, 00:21:57.826 "enable": false 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_enable_histogram", 00:21:57.826 "params": { 00:21:57.826 "name": "nvme0n1", 00:21:57.826 "enable": true 00:21:57.826 } 00:21:57.826 }, 00:21:57.826 { 00:21:57.826 "method": "bdev_wait_for_examine" 00:21:57.827 } 00:21:57.827 ] 00:21:57.827 }, 00:21:57.827 { 00:21:57.827 "subsystem": "nbd", 00:21:57.827 "config": [] 00:21:57.827 } 00:21:57.827 ] 00:21:57.827 }' 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2124516 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2124516 ']' 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2124516 00:21:57.827 21:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2124516 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2124516' 00:21:57.827 killing process with pid 2124516 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2124516 00:21:57.827 Received shutdown signal, test time was about 1.000000 seconds 00:21:57.827 00:21:57.827 Latency(us) 00:21:57.827 [2024-12-05T20:14:59.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.827 [2024-12-05T20:14:59.264Z] =================================================================================================================== 00:21:57.827 [2024-12-05T20:14:59.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.827 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2124516 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2124485 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2124485 ']' 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2124485 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.088 
21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2124485 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2124485' 00:21:58.088 killing process with pid 2124485 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2124485 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2124485 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.088 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:58.088 "subsystems": [ 00:21:58.088 { 00:21:58.088 "subsystem": "keyring", 00:21:58.088 "config": [ 00:21:58.088 { 00:21:58.088 "method": "keyring_file_add_key", 00:21:58.088 "params": { 00:21:58.088 "name": "key0", 00:21:58.088 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:58.088 } 00:21:58.088 } 00:21:58.088 ] 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "subsystem": "iobuf", 00:21:58.088 "config": [ 00:21:58.088 { 00:21:58.088 "method": "iobuf_set_options", 00:21:58.088 "params": { 00:21:58.088 "small_pool_count": 8192, 00:21:58.088 "large_pool_count": 1024, 00:21:58.088 "small_bufsize": 8192, 00:21:58.088 "large_bufsize": 135168, 00:21:58.088 "enable_numa": false 00:21:58.088 } 00:21:58.088 } 00:21:58.088 ] 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "subsystem": "sock", 00:21:58.088 "config": [ 
00:21:58.088 { 00:21:58.088 "method": "sock_set_default_impl", 00:21:58.088 "params": { 00:21:58.088 "impl_name": "posix" 00:21:58.088 } 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "method": "sock_impl_set_options", 00:21:58.088 "params": { 00:21:58.088 "impl_name": "ssl", 00:21:58.088 "recv_buf_size": 4096, 00:21:58.088 "send_buf_size": 4096, 00:21:58.088 "enable_recv_pipe": true, 00:21:58.088 "enable_quickack": false, 00:21:58.088 "enable_placement_id": 0, 00:21:58.088 "enable_zerocopy_send_server": true, 00:21:58.088 "enable_zerocopy_send_client": false, 00:21:58.088 "zerocopy_threshold": 0, 00:21:58.088 "tls_version": 0, 00:21:58.088 "enable_ktls": false 00:21:58.088 } 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "method": "sock_impl_set_options", 00:21:58.088 "params": { 00:21:58.088 "impl_name": "posix", 00:21:58.088 "recv_buf_size": 2097152, 00:21:58.088 "send_buf_size": 2097152, 00:21:58.088 "enable_recv_pipe": true, 00:21:58.088 "enable_quickack": false, 00:21:58.088 "enable_placement_id": 0, 00:21:58.088 "enable_zerocopy_send_server": true, 00:21:58.088 "enable_zerocopy_send_client": false, 00:21:58.088 "zerocopy_threshold": 0, 00:21:58.088 "tls_version": 0, 00:21:58.088 "enable_ktls": false 00:21:58.088 } 00:21:58.088 } 00:21:58.088 ] 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "subsystem": "vmd", 00:21:58.088 "config": [] 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "subsystem": "accel", 00:21:58.088 "config": [ 00:21:58.088 { 00:21:58.088 "method": "accel_set_options", 00:21:58.088 "params": { 00:21:58.088 "small_cache_size": 128, 00:21:58.088 "large_cache_size": 16, 00:21:58.088 "task_count": 2048, 00:21:58.088 "sequence_count": 2048, 00:21:58.088 "buf_count": 2048 00:21:58.088 } 00:21:58.088 } 00:21:58.088 ] 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "subsystem": "bdev", 00:21:58.088 "config": [ 00:21:58.088 { 00:21:58.088 "method": "bdev_set_options", 00:21:58.088 "params": { 00:21:58.088 "bdev_io_pool_size": 65535, 00:21:58.088 "bdev_io_cache_size": 
256, 00:21:58.088 "bdev_auto_examine": true, 00:21:58.088 "iobuf_small_cache_size": 128, 00:21:58.088 "iobuf_large_cache_size": 16 00:21:58.088 } 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "method": "bdev_raid_set_options", 00:21:58.088 "params": { 00:21:58.088 "process_window_size_kb": 1024, 00:21:58.088 "process_max_bandwidth_mb_sec": 0 00:21:58.088 } 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "method": "bdev_iscsi_set_options", 00:21:58.088 "params": { 00:21:58.088 "timeout_sec": 30 00:21:58.088 } 00:21:58.088 }, 00:21:58.088 { 00:21:58.088 "method": "bdev_nvme_set_options", 00:21:58.088 "params": { 00:21:58.088 "action_on_timeout": "none", 00:21:58.088 "timeout_us": 0, 00:21:58.088 "timeout_admin_us": 0, 00:21:58.088 "keep_alive_timeout_ms": 10000, 00:21:58.088 "arbitration_burst": 0, 00:21:58.088 "low_priority_weight": 0, 00:21:58.088 "medium_priority_weight": 0, 00:21:58.088 "high_priority_weight": 0, 00:21:58.088 "nvme_adminq_poll_period_us": 10000, 00:21:58.088 "nvme_ioq_poll_period_us": 0, 00:21:58.088 "io_queue_requests": 0, 00:21:58.088 "delay_cmd_submit": true, 00:21:58.088 "transport_retry_count": 4, 00:21:58.088 "bdev_retry_count": 3, 00:21:58.088 "transport_ack_timeout": 0, 00:21:58.088 "ctrlr_loss_timeout_sec": 0, 00:21:58.088 "reconnect_delay_sec": 0, 00:21:58.088 "fast_io_fail_timeout_sec": 0, 00:21:58.088 "disable_auto_failback": false, 00:21:58.088 "generate_uuids": false, 00:21:58.088 "transport_tos": 0, 00:21:58.088 "nvme_error_stat": false, 00:21:58.088 "rdma_srq_size": 0, 00:21:58.088 "io_path_stat": false, 00:21:58.088 "allow_accel_sequence": false, 00:21:58.089 "rdma_max_cq_size": 0, 00:21:58.089 "rdma_cm_event_timeout_ms": 0, 00:21:58.089 "dhchap_digests": [ 00:21:58.089 "sha256", 00:21:58.089 "sha384", 00:21:58.089 "sha512" 00:21:58.089 ], 00:21:58.089 "dhchap_dhgroups": [ 00:21:58.089 "null", 00:21:58.089 "ffdhe2048", 00:21:58.089 "ffdhe3072", 00:21:58.089 "ffdhe4096", 00:21:58.089 "ffdhe6144", 00:21:58.089 "ffdhe8192" 00:21:58.089 ] 
00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "bdev_nvme_set_hotplug", 00:21:58.089 "params": { 00:21:58.089 "period_us": 100000, 00:21:58.089 "enable": false 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "bdev_malloc_create", 00:21:58.089 "params": { 00:21:58.089 "name": "malloc0", 00:21:58.089 "num_blocks": 8192, 00:21:58.089 "block_size": 4096, 00:21:58.089 "physical_block_size": 4096, 00:21:58.089 "uuid": "22ca7d75-1f87-46d2-a284-45bc24d6f6b7", 00:21:58.089 "optimal_io_boundary": 0, 00:21:58.089 "md_size": 0, 00:21:58.089 "dif_type": 0, 00:21:58.089 "dif_is_head_of_md": false, 00:21:58.089 "dif_pi_format": 0 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "bdev_wait_for_examine" 00:21:58.089 } 00:21:58.089 ] 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "subsystem": "nbd", 00:21:58.089 "config": [] 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "subsystem": "scheduler", 00:21:58.089 "config": [ 00:21:58.089 { 00:21:58.089 "method": "framework_set_scheduler", 00:21:58.089 "params": { 00:21:58.089 "name": "static" 00:21:58.089 } 00:21:58.089 } 00:21:58.089 ] 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "subsystem": "nvmf", 00:21:58.089 "config": [ 00:21:58.089 { 00:21:58.089 "method": "nvmf_set_config", 00:21:58.089 "params": { 00:21:58.089 "discovery_filter": "match_any", 00:21:58.089 "admin_cmd_passthru": { 00:21:58.089 "identify_ctrlr": false 00:21:58.089 }, 00:21:58.089 "dhchap_digests": [ 00:21:58.089 "sha256", 00:21:58.089 "sha384", 00:21:58.089 "sha512" 00:21:58.089 ], 00:21:58.089 "dhchap_dhgroups": [ 00:21:58.089 "null", 00:21:58.089 "ffdhe2048", 00:21:58.089 "ffdhe3072", 00:21:58.089 "ffdhe4096", 00:21:58.089 "ffdhe6144", 00:21:58.089 "ffdhe8192" 00:21:58.089 ] 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "nvmf_set_max_subsystems", 00:21:58.089 "params": { 00:21:58.089 "max_subsystems": 1024 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": 
"nvmf_set_crdt", 00:21:58.089 "params": { 00:21:58.089 "crdt1": 0, 00:21:58.089 "crdt2": 0, 00:21:58.089 "crdt3": 0 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "nvmf_create_transport", 00:21:58.089 "params": { 00:21:58.089 "trtype": "TCP", 00:21:58.089 "max_queue_depth": 128, 00:21:58.089 "max_io_qpairs_per_ctrlr": 127, 00:21:58.089 "in_capsule_data_size": 4096, 00:21:58.089 "max_io_size": 131072, 00:21:58.089 "io_unit_size": 131072, 00:21:58.089 "max_aq_depth": 128, 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.089 "num_shared_buffers": 511, 00:21:58.089 "buf_cache_size": 4294967295, 00:21:58.089 "dif_insert_or_strip": false, 00:21:58.089 "zcopy": false, 00:21:58.089 "c2h_success": false, 00:21:58.089 "sock_priority": 0, 00:21:58.089 "abort_timeout_sec": 1, 00:21:58.089 "ack_timeout": 0, 00:21:58.089 "data_wr_pool_size": 0 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "nvmf_create_subsystem", 00:21:58.089 "params": { 00:21:58.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.089 "allow_any_host": false, 00:21:58.089 "serial_number": "00000000000000000000", 00:21:58.089 "model_number": "SPDK bdev Controller", 00:21:58.089 "max_namespaces": 32, 00:21:58.089 "min_cntlid": 1, 00:21:58.089 "max_cntlid": 65519, 00:21:58.089 "ana_reporting": false 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "nvmf_subsystem_add_host", 00:21:58.089 "params": { 00:21:58.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.089 "host": "nqn.2016-06.io.spdk:host1", 00:21:58.089 "psk": "key0" 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "nvmf_subsystem_add_ns", 00:21:58.089 "params": { 00:21:58.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.089 "namespace": { 00:21:58.089 "nsid": 1, 00:21:58.089 "bdev_name": "malloc0", 00:21:58.089 "nguid": "22CA7D751F8746D2A28445BC24D6F6B7", 00:21:58.089 "uuid": "22ca7d75-1f87-46d2-a284-45bc24d6f6b7", 
00:21:58.089 "no_auto_visible": false 00:21:58.089 } 00:21:58.089 } 00:21:58.089 }, 00:21:58.089 { 00:21:58.089 "method": "nvmf_subsystem_add_listener", 00:21:58.089 "params": { 00:21:58.089 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.089 "listen_address": { 00:21:58.089 "trtype": "TCP", 00:21:58.089 "adrfam": "IPv4", 00:21:58.089 "traddr": "10.0.0.2", 00:21:58.089 "trsvcid": "4420" 00:21:58.089 }, 00:21:58.089 "secure_channel": false, 00:21:58.089 "sock_impl": "ssl" 00:21:58.089 } 00:21:58.089 } 00:21:58.089 ] 00:21:58.089 } 00:21:58.089 ] 00:21:58.089 }' 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2125200 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2125200 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2125200 ']' 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.089 21:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.089 [2024-12-05 21:14:59.520786] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:21:58.089 [2024-12-05 21:14:59.520827] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.350 [2024-12-05 21:14:59.596419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.350 [2024-12-05 21:14:59.629873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.350 [2024-12-05 21:14:59.629909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.350 [2024-12-05 21:14:59.629917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.350 [2024-12-05 21:14:59.629925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.350 [2024-12-05 21:14:59.629932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:58.350 [2024-12-05 21:14:59.630503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.611 [2024-12-05 21:14:59.830695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.611 [2024-12-05 21:14:59.862707] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.611 [2024-12-05 21:14:59.862942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2125467 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2125467 /var/tmp/bdevperf.sock 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2125467 ']' 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:59.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.181 21:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:59.181 "subsystems": [ 00:21:59.181 { 00:21:59.181 "subsystem": "keyring", 00:21:59.181 "config": [ 00:21:59.181 { 00:21:59.181 "method": "keyring_file_add_key", 00:21:59.181 "params": { 00:21:59.181 "name": "key0", 00:21:59.181 "path": "/tmp/tmp.P9dZDKvZGJ" 00:21:59.181 } 00:21:59.181 } 00:21:59.181 ] 00:21:59.181 }, 00:21:59.181 { 00:21:59.181 "subsystem": "iobuf", 00:21:59.181 "config": [ 00:21:59.181 { 00:21:59.181 "method": "iobuf_set_options", 00:21:59.181 "params": { 00:21:59.181 "small_pool_count": 8192, 00:21:59.181 "large_pool_count": 1024, 00:21:59.182 "small_bufsize": 8192, 00:21:59.182 "large_bufsize": 135168, 00:21:59.182 "enable_numa": false 00:21:59.182 } 00:21:59.182 } 00:21:59.182 ] 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "subsystem": "sock", 00:21:59.182 "config": [ 00:21:59.182 { 00:21:59.182 "method": "sock_set_default_impl", 00:21:59.182 "params": { 00:21:59.182 "impl_name": "posix" 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "sock_impl_set_options", 00:21:59.182 "params": { 00:21:59.182 "impl_name": "ssl", 00:21:59.182 "recv_buf_size": 4096, 00:21:59.182 "send_buf_size": 4096, 00:21:59.182 "enable_recv_pipe": true, 00:21:59.182 "enable_quickack": false, 00:21:59.182 "enable_placement_id": 0, 00:21:59.182 "enable_zerocopy_send_server": true, 00:21:59.182 
"enable_zerocopy_send_client": false, 00:21:59.182 "zerocopy_threshold": 0, 00:21:59.182 "tls_version": 0, 00:21:59.182 "enable_ktls": false 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "sock_impl_set_options", 00:21:59.182 "params": { 00:21:59.182 "impl_name": "posix", 00:21:59.182 "recv_buf_size": 2097152, 00:21:59.182 "send_buf_size": 2097152, 00:21:59.182 "enable_recv_pipe": true, 00:21:59.182 "enable_quickack": false, 00:21:59.182 "enable_placement_id": 0, 00:21:59.182 "enable_zerocopy_send_server": true, 00:21:59.182 "enable_zerocopy_send_client": false, 00:21:59.182 "zerocopy_threshold": 0, 00:21:59.182 "tls_version": 0, 00:21:59.182 "enable_ktls": false 00:21:59.182 } 00:21:59.182 } 00:21:59.182 ] 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "subsystem": "vmd", 00:21:59.182 "config": [] 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "subsystem": "accel", 00:21:59.182 "config": [ 00:21:59.182 { 00:21:59.182 "method": "accel_set_options", 00:21:59.182 "params": { 00:21:59.182 "small_cache_size": 128, 00:21:59.182 "large_cache_size": 16, 00:21:59.182 "task_count": 2048, 00:21:59.182 "sequence_count": 2048, 00:21:59.182 "buf_count": 2048 00:21:59.182 } 00:21:59.182 } 00:21:59.182 ] 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "subsystem": "bdev", 00:21:59.182 "config": [ 00:21:59.182 { 00:21:59.182 "method": "bdev_set_options", 00:21:59.182 "params": { 00:21:59.182 "bdev_io_pool_size": 65535, 00:21:59.182 "bdev_io_cache_size": 256, 00:21:59.182 "bdev_auto_examine": true, 00:21:59.182 "iobuf_small_cache_size": 128, 00:21:59.182 "iobuf_large_cache_size": 16 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "bdev_raid_set_options", 00:21:59.182 "params": { 00:21:59.182 "process_window_size_kb": 1024, 00:21:59.182 "process_max_bandwidth_mb_sec": 0 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "bdev_iscsi_set_options", 00:21:59.182 "params": { 00:21:59.182 "timeout_sec": 30 00:21:59.182 } 00:21:59.182 }, 
00:21:59.182 { 00:21:59.182 "method": "bdev_nvme_set_options", 00:21:59.182 "params": { 00:21:59.182 "action_on_timeout": "none", 00:21:59.182 "timeout_us": 0, 00:21:59.182 "timeout_admin_us": 0, 00:21:59.182 "keep_alive_timeout_ms": 10000, 00:21:59.182 "arbitration_burst": 0, 00:21:59.182 "low_priority_weight": 0, 00:21:59.182 "medium_priority_weight": 0, 00:21:59.182 "high_priority_weight": 0, 00:21:59.182 "nvme_adminq_poll_period_us": 10000, 00:21:59.182 "nvme_ioq_poll_period_us": 0, 00:21:59.182 "io_queue_requests": 512, 00:21:59.182 "delay_cmd_submit": true, 00:21:59.182 "transport_retry_count": 4, 00:21:59.182 "bdev_retry_count": 3, 00:21:59.182 "transport_ack_timeout": 0, 00:21:59.182 "ctrlr_loss_timeout_sec": 0, 00:21:59.182 "reconnect_delay_sec": 0, 00:21:59.182 "fast_io_fail_timeout_sec": 0, 00:21:59.182 "disable_auto_failback": false, 00:21:59.182 "generate_uuids": false, 00:21:59.182 "transport_tos": 0, 00:21:59.182 "nvme_error_stat": false, 00:21:59.182 "rdma_srq_size": 0, 00:21:59.182 "io_path_stat": false, 00:21:59.182 "allow_accel_sequence": false, 00:21:59.182 "rdma_max_cq_size": 0, 00:21:59.182 "rdma_cm_event_timeout_ms": 0, 00:21:59.182 "dhchap_digests": [ 00:21:59.182 "sha256", 00:21:59.182 "sha384", 00:21:59.182 "sha512" 00:21:59.182 ], 00:21:59.182 "dhchap_dhgroups": [ 00:21:59.182 "null", 00:21:59.182 "ffdhe2048", 00:21:59.182 "ffdhe3072", 00:21:59.182 "ffdhe4096", 00:21:59.182 "ffdhe6144", 00:21:59.182 "ffdhe8192" 00:21:59.182 ] 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "bdev_nvme_attach_controller", 00:21:59.182 "params": { 00:21:59.182 "name": "nvme0", 00:21:59.182 "trtype": "TCP", 00:21:59.182 "adrfam": "IPv4", 00:21:59.182 "traddr": "10.0.0.2", 00:21:59.182 "trsvcid": "4420", 00:21:59.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.182 "prchk_reftag": false, 00:21:59.182 "prchk_guard": false, 00:21:59.182 "ctrlr_loss_timeout_sec": 0, 00:21:59.182 "reconnect_delay_sec": 0, 00:21:59.182 
"fast_io_fail_timeout_sec": 0, 00:21:59.182 "psk": "key0", 00:21:59.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.182 "hdgst": false, 00:21:59.182 "ddgst": false, 00:21:59.182 "multipath": "multipath" 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "bdev_nvme_set_hotplug", 00:21:59.182 "params": { 00:21:59.182 "period_us": 100000, 00:21:59.182 "enable": false 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "bdev_enable_histogram", 00:21:59.182 "params": { 00:21:59.182 "name": "nvme0n1", 00:21:59.182 "enable": true 00:21:59.182 } 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "method": "bdev_wait_for_examine" 00:21:59.182 } 00:21:59.182 ] 00:21:59.182 }, 00:21:59.182 { 00:21:59.182 "subsystem": "nbd", 00:21:59.182 "config": [] 00:21:59.182 } 00:21:59.182 ] 00:21:59.182 }' 00:21:59.182 [2024-12-05 21:15:00.442263] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:21:59.182 [2024-12-05 21:15:00.442321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125467 ] 00:21:59.182 [2024-12-05 21:15:00.532353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.182 [2024-12-05 21:15:00.562823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.442 [2024-12-05 21:15:00.699191] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.117 21:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.117 21:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:00.117 21:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:22:00.117 21:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:00.117 21:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.117 21:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:00.117 Running I/O for 1 seconds... 00:22:01.137 4800.00 IOPS, 18.75 MiB/s 00:22:01.137 Latency(us) 00:22:01.137 [2024-12-05T20:15:02.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:01.137 Verification LBA range: start 0x0 length 0x2000 00:22:01.137 nvme0n1 : 1.02 4852.12 18.95 0.00 0.00 26198.69 6389.76 44346.03 00:22:01.137 [2024-12-05T20:15:02.574Z] =================================================================================================================== 00:22:01.137 [2024-12-05T20:15:02.574Z] Total : 4852.12 18.95 0.00 0.00 26198.69 6389.76 44346.03 00:22:01.137 { 00:22:01.137 "results": [ 00:22:01.137 { 00:22:01.137 "job": "nvme0n1", 00:22:01.137 "core_mask": "0x2", 00:22:01.137 "workload": "verify", 00:22:01.137 "status": "finished", 00:22:01.137 "verify_range": { 00:22:01.137 "start": 0, 00:22:01.137 "length": 8192 00:22:01.137 }, 00:22:01.137 "queue_depth": 128, 00:22:01.137 "io_size": 4096, 00:22:01.137 "runtime": 1.015638, 00:22:01.137 "iops": 4852.12250821651, 00:22:01.137 "mibps": 18.953603547720743, 00:22:01.137 "io_failed": 0, 00:22:01.137 "io_timeout": 0, 00:22:01.137 "avg_latency_us": 26198.68536796537, 00:22:01.137 "min_latency_us": 6389.76, 00:22:01.137 "max_latency_us": 44346.026666666665 00:22:01.137 } 00:22:01.137 ], 00:22:01.137 "core_count": 1 00:22:01.137 } 00:22:01.137 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:22:01.137 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:01.137 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:01.137 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:01.137 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:01.137 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:01.138 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:01.138 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:01.138 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:01.138 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:01.138 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:01.138 nvmf_trace.0 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2125467 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2125467 ']' 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2125467 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 2125467 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125467' 00:22:01.399 killing process with pid 2125467 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2125467 00:22:01.399 Received shutdown signal, test time was about 1.000000 seconds 00:22:01.399 00:22:01.399 Latency(us) 00:22:01.399 [2024-12-05T20:15:02.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.399 [2024-12-05T20:15:02.836Z] =================================================================================================================== 00:22:01.399 [2024-12-05T20:15:02.836Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2125467 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.399 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.399 rmmod nvme_tcp 00:22:01.399 rmmod nvme_fabrics 00:22:01.659 rmmod nvme_keyring 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2125200 ']' 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2125200 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2125200 ']' 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2125200 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125200 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125200' 00:22:01.659 killing process with pid 2125200 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2125200 00:22:01.659 21:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2125200 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:01.659 21:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.659 21:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.c7gGlJaiKs /tmp/tmp.YQsHVKBdVA /tmp/tmp.P9dZDKvZGJ 00:22:04.200 00:22:04.200 real 1m22.193s 00:22:04.200 user 2m5.514s 00:22:04.200 sys 0m27.891s 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.200 ************************************ 00:22:04.200 END TEST nvmf_tls 00:22:04.200 ************************************ 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:04.200 
21:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:04.200 ************************************ 00:22:04.200 START TEST nvmf_fips 00:22:04.200 ************************************ 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:04.200 * Looking for test storage... 00:22:04.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:04.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.200 --rc genhtml_branch_coverage=1 00:22:04.200 --rc genhtml_function_coverage=1 00:22:04.200 --rc genhtml_legend=1 00:22:04.200 --rc geninfo_all_blocks=1 00:22:04.200 --rc geninfo_unexecuted_blocks=1 00:22:04.200 00:22:04.200 ' 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:04.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.200 --rc genhtml_branch_coverage=1 00:22:04.200 --rc genhtml_function_coverage=1 00:22:04.200 --rc genhtml_legend=1 00:22:04.200 --rc geninfo_all_blocks=1 00:22:04.200 --rc geninfo_unexecuted_blocks=1 00:22:04.200 00:22:04.200 ' 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:04.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.200 --rc genhtml_branch_coverage=1 00:22:04.200 --rc genhtml_function_coverage=1 00:22:04.200 --rc genhtml_legend=1 00:22:04.200 --rc geninfo_all_blocks=1 00:22:04.200 --rc geninfo_unexecuted_blocks=1 00:22:04.200 00:22:04.200 ' 00:22:04.200 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:04.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.200 --rc genhtml_branch_coverage=1 00:22:04.200 --rc genhtml_function_coverage=1 00:22:04.200 --rc genhtml_legend=1 00:22:04.200 --rc geninfo_all_blocks=1 00:22:04.201 --rc geninfo_unexecuted_blocks=1 00:22:04.201 00:22:04.201 ' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:04.201 Error setting digest 00:22:04.201 40A21172F57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:04.201 40A21172F57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.201 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.463 21:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.463 21:15:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:12.609 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:12.609 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:12.609 Found net devices under 0000:31:00.0: cvl_0_0 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:12.609 Found net devices under 0000:31:00.1: cvl_0_1 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.609 21:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.609 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.610 21:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:22:12.870 00:22:12.870 --- 10.0.0.2 ping statistics --- 00:22:12.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.870 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:22:12.870 00:22:12.870 --- 10.0.0.1 ping statistics --- 00:22:12.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.870 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.870 21:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2131193 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2131193 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2131193 ']' 00:22:12.870 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.871 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.871 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.871 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.871 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.871 [2024-12-05 21:15:14.203650] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:22:12.871 [2024-12-05 21:15:14.203729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.131 [2024-12-05 21:15:14.311995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.131 [2024-12-05 21:15:14.361976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.131 [2024-12-05 21:15:14.362029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.131 [2024-12-05 21:15:14.362038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.131 [2024-12-05 21:15:14.362046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.131 [2024-12-05 21:15:14.362052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:13.131 [2024-12-05 21:15:14.362899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.702 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.702 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:13.702 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:13.702 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.702 21:15:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.702 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.702 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:13.702 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:13.702 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:13.703 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Nyf 00:22:13.703 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:13.703 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Nyf 00:22:13.703 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Nyf 00:22:13.703 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Nyf 00:22:13.703 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:13.963 [2024-12-05 21:15:15.209044] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.963 [2024-12-05 21:15:15.225041] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.963 [2024-12-05 21:15:15.225342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.963 malloc0 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2131523 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2131523 /var/tmp/bdevperf.sock 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2131523 ']' 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.963 21:15:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.963 [2024-12-05 21:15:15.377738] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:22:13.963 [2024-12-05 21:15:15.377813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131523 ] 00:22:14.223 [2024-12-05 21:15:15.447533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.223 [2024-12-05 21:15:15.483469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.793 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.793 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:14.793 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Nyf 00:22:15.054 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:15.054 [2024-12-05 21:15:16.467052] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.314 TLSTESTn1 00:22:15.314 21:15:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:15.314 Running I/O for 10 seconds... 
00:22:17.639 4523.00 IOPS, 17.67 MiB/s [2024-12-05T20:15:20.021Z] 3980.00 IOPS, 15.55 MiB/s [2024-12-05T20:15:20.966Z] 4270.00 IOPS, 16.68 MiB/s [2024-12-05T20:15:21.910Z] 4263.50 IOPS, 16.65 MiB/s [2024-12-05T20:15:22.848Z] 4433.00 IOPS, 17.32 MiB/s [2024-12-05T20:15:23.791Z] 4459.50 IOPS, 17.42 MiB/s [2024-12-05T20:15:24.731Z] 4471.57 IOPS, 17.47 MiB/s [2024-12-05T20:15:26.111Z] 4524.75 IOPS, 17.67 MiB/s [2024-12-05T20:15:26.684Z] 4569.00 IOPS, 17.85 MiB/s [2024-12-05T20:15:26.945Z] 4603.10 IOPS, 17.98 MiB/s 00:22:25.508 Latency(us) 00:22:25.508 [2024-12-05T20:15:26.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.508 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.508 Verification LBA range: start 0x0 length 0x2000 00:22:25.508 TLSTESTn1 : 10.03 4604.28 17.99 0.00 0.00 27754.31 4860.59 67720.53 00:22:25.508 [2024-12-05T20:15:26.945Z] =================================================================================================================== 00:22:25.508 [2024-12-05T20:15:26.945Z] Total : 4604.28 17.99 0.00 0.00 27754.31 4860.59 67720.53 00:22:25.508 { 00:22:25.508 "results": [ 00:22:25.508 { 00:22:25.508 "job": "TLSTESTn1", 00:22:25.508 "core_mask": "0x4", 00:22:25.508 "workload": "verify", 00:22:25.508 "status": "finished", 00:22:25.508 "verify_range": { 00:22:25.508 "start": 0, 00:22:25.508 "length": 8192 00:22:25.508 }, 00:22:25.508 "queue_depth": 128, 00:22:25.508 "io_size": 4096, 00:22:25.508 "runtime": 10.025019, 00:22:25.508 "iops": 4604.280550490727, 00:22:25.508 "mibps": 17.985470900354404, 00:22:25.508 "io_failed": 0, 00:22:25.508 "io_timeout": 0, 00:22:25.508 "avg_latency_us": 27754.309826248973, 00:22:25.508 "min_latency_us": 4860.586666666667, 00:22:25.508 "max_latency_us": 67720.53333333334 00:22:25.508 } 00:22:25.508 ], 00:22:25.509 "core_count": 1 00:22:25.509 } 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:25.509 
21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:25.509 nvmf_trace.0 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2131523 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2131523 ']' 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2131523 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131523 00:22:25.509 21:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131523' 00:22:25.509 killing process with pid 2131523 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2131523 00:22:25.509 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.509 00:22:25.509 Latency(us) 00:22:25.509 [2024-12-05T20:15:26.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.509 [2024-12-05T20:15:26.946Z] =================================================================================================================== 00:22:25.509 [2024-12-05T20:15:26.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.509 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2131523 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.770 21:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.770 rmmod nvme_tcp 00:22:25.770 rmmod nvme_fabrics 00:22:25.770 rmmod nvme_keyring 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2131193 ']' 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2131193 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2131193 ']' 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2131193 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2131193 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:25.770 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2131193' 00:22:25.771 killing process with pid 2131193 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2131193 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2131193 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.771 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.031 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.031 21:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Nyf 00:22:27.946 00:22:27.946 real 0m24.055s 00:22:27.946 user 0m24.491s 00:22:27.946 sys 0m10.720s 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.946 ************************************ 00:22:27.946 END TEST nvmf_fips 00:22:27.946 ************************************ 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.946 ************************************ 00:22:27.946 START TEST nvmf_control_msg_list 00:22:27.946 ************************************ 00:22:27.946 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:28.207 * Looking for test storage... 00:22:28.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.207 21:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.207 --rc genhtml_branch_coverage=1 00:22:28.207 --rc genhtml_function_coverage=1 00:22:28.207 --rc genhtml_legend=1 00:22:28.207 --rc geninfo_all_blocks=1 00:22:28.207 --rc geninfo_unexecuted_blocks=1 00:22:28.207 00:22:28.207 ' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.207 --rc genhtml_branch_coverage=1 00:22:28.207 --rc genhtml_function_coverage=1 00:22:28.207 --rc genhtml_legend=1 00:22:28.207 --rc geninfo_all_blocks=1 00:22:28.207 --rc geninfo_unexecuted_blocks=1 00:22:28.207 00:22:28.207 ' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.207 --rc genhtml_branch_coverage=1 00:22:28.207 --rc genhtml_function_coverage=1 00:22:28.207 --rc genhtml_legend=1 00:22:28.207 --rc geninfo_all_blocks=1 00:22:28.207 --rc geninfo_unexecuted_blocks=1 00:22:28.207 00:22:28.207 ' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:22:28.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.207 --rc genhtml_branch_coverage=1 00:22:28.207 --rc genhtml_function_coverage=1 00:22:28.207 --rc genhtml_legend=1 00:22:28.207 --rc geninfo_all_blocks=1 00:22:28.207 --rc geninfo_unexecuted_blocks=1 00:22:28.207 00:22:28.207 ' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.207 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.207 21:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.208 21:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.208 21:15:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.351 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:36.351 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:36.351 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.351 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.351 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:36.352 Found net devices under 0000:31:00.0: cvl_0_0 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:36.352 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:36.352 Found net devices under 0000:31:00.1: cvl_0_1 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.352 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.613 21:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:22:36.613 00:22:36.613 --- 10.0.0.2 ping statistics --- 00:22:36.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.613 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:22:36.613 00:22:36.613 --- 10.0.0.1 ping statistics --- 00:22:36.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.613 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2138553 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2138553 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2138553 ']' 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.613 21:15:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.613 [2024-12-05 21:15:38.030666] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:22:36.613 [2024-12-05 21:15:38.030716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.874 [2024-12-05 21:15:38.116382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.874 [2024-12-05 21:15:38.150680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.874 [2024-12-05 21:15:38.150715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.874 [2024-12-05 21:15:38.150722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.874 [2024-12-05 21:15:38.150729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.874 [2024-12-05 21:15:38.150736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:36.874 [2024-12-05 21:15:38.151345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.874 [2024-12-05 21:15:38.279253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:36.874 Malloc0 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.874 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:37.136 [2024-12-05 21:15:38.314104] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2138579 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2138580 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2138581 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2138579 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:37.136 21:15:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:37.136 [2024-12-05 21:15:38.392572] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
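The target configuration driven by control_msg_list.sh@19–23 above maps to five JSON-RPC calls: a TCP transport with a deliberately tiny control-message pool (--control-msg-num 1, which is what this test exercises), an allow-any-host subsystem, a 32 MiB malloc bdev, the namespace attach, and the TCP listener. A sketch of the equivalent scripts/rpc.py invocations follows; it only prints the calls (a running nvmf_tgt is required to execute them), and the rpc.py path relative to an SPDK checkout is an assumption — the flags and names are taken verbatim from the trace:

```shell
#!/bin/sh
# Sketch of the RPC sequence issued over /var/tmp/spdk.sock in the trace above.
# Printed rather than executed; each line corresponds to one rpc_cmd in the log.
RPC="scripts/rpc.py"              # assumed path inside an SPDK checkout
NQN=nqn.2024-07.io.spdk:cnode0
print_rpc_cmds() {
    echo "$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1"
    echo "$RPC nvmf_create_subsystem $NQN -a"
    echo "$RPC bdev_malloc_create -b Malloc0 32 512"
    echo "$RPC nvmf_subsystem_add_ns $NQN Malloc0"
    echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
}
print_rpc_cmds
```

With the listener up, the test launches three spdk_nvme_perf initiators (cores 0x2, 0x4, 0x8) against the same 1-entry control-message list, which is why the deprecated-discovery warnings above appear three times.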
00:22:37.136 [2024-12-05 21:15:38.412667] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:37.136 [2024-12-05 21:15:38.412946] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:38.589 Initializing NVMe Controllers 00:22:38.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:38.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:38.589 Initialization complete. Launching workers. 00:22:38.589 ======================================================== 00:22:38.589 Latency(us) 00:22:38.589 Device Information : IOPS MiB/s Average min max 00:22:38.589 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1528.00 5.97 654.49 300.23 846.58 00:22:38.589 ======================================================== 00:22:38.589 Total : 1528.00 5.97 654.49 300.23 846.58 00:22:38.589 00:22:38.589 [2024-12-05 21:15:39.516843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf915a0 is same with the state(6) to be set 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2138580 00:22:38.589 Initializing NVMe Controllers 00:22:38.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:38.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:38.589 Initialization complete. Launching workers. 
00:22:38.589 ======================================================== 00:22:38.589 Latency(us) 00:22:38.589 Device Information : IOPS MiB/s Average min max 00:22:38.589 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2751.00 10.75 363.30 142.77 614.29 00:22:38.589 ======================================================== 00:22:38.589 Total : 2751.00 10.75 363.30 142.77 614.29 00:22:38.589 00:22:38.589 Initializing NVMe Controllers 00:22:38.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:38.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:38.589 Initialization complete. Launching workers. 00:22:38.589 ======================================================== 00:22:38.589 Latency(us) 00:22:38.589 Device Information : IOPS MiB/s Average min max 00:22:38.589 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41457.71 40712.04 42089.99 00:22:38.589 ======================================================== 00:22:38.589 Total : 25.00 0.10 41457.71 40712.04 42089.99 00:22:38.589 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2138581 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:38.589 21:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.589 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.589 rmmod nvme_tcp 00:22:38.590 rmmod nvme_fabrics 00:22:38.590 rmmod nvme_keyring 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2138553 ']' 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2138553 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2138553 ']' 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2138553 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138553 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138553' 00:22:38.590 killing process with pid 2138553 00:22:38.590 
21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2138553 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2138553 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.590 21:15:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.150 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.150 00:22:41.150 real 0m12.629s 00:22:41.150 user 0m7.521s 00:22:41.150 sys 0m7.177s 00:22:41.150 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.150 21:15:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:41.150 ************************************ 00:22:41.150 END TEST nvmf_control_msg_list 00:22:41.150 ************************************ 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.150 ************************************ 00:22:41.150 START TEST nvmf_wait_for_buf 00:22:41.150 ************************************ 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:41.150 * Looking for test storage... 
00:22:41.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:22:41.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.150 --rc genhtml_branch_coverage=1 00:22:41.150 --rc genhtml_function_coverage=1 00:22:41.150 --rc genhtml_legend=1 00:22:41.150 --rc geninfo_all_blocks=1 00:22:41.150 --rc geninfo_unexecuted_blocks=1 00:22:41.150 00:22:41.150 ' 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:41.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.150 --rc genhtml_branch_coverage=1 00:22:41.150 --rc genhtml_function_coverage=1 00:22:41.150 --rc genhtml_legend=1 00:22:41.150 --rc geninfo_all_blocks=1 00:22:41.150 --rc geninfo_unexecuted_blocks=1 00:22:41.150 00:22:41.150 ' 00:22:41.150 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:41.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.150 --rc genhtml_branch_coverage=1 00:22:41.150 --rc genhtml_function_coverage=1 00:22:41.150 --rc genhtml_legend=1 00:22:41.150 --rc geninfo_all_blocks=1 00:22:41.150 --rc geninfo_unexecuted_blocks=1 00:22:41.151 00:22:41.151 ' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:41.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.151 --rc genhtml_branch_coverage=1 00:22:41.151 --rc genhtml_function_coverage=1 00:22:41.151 --rc genhtml_legend=1 00:22:41.151 --rc geninfo_all_blocks=1 00:22:41.151 --rc geninfo_unexecuted_blocks=1 00:22:41.151 00:22:41.151 ' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.151 21:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.300 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:49.301 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:49.301 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:49.301 Found net devices under 0000:31:00.0: cvl_0_0 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.301 21:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:49.301 Found net devices under 0000:31:00.1: cvl_0_1 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.301 21:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.301 21:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:22:49.301 00:22:49.301 --- 10.0.0.2 ping statistics --- 00:22:49.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.301 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:22:49.301 00:22:49.301 --- 10.0.0.1 ping statistics --- 00:22:49.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.301 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2143603 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2143603 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2143603 ']' 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.301 21:15:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:49.562 [2024-12-05 21:15:50.774652] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:22:49.562 [2024-12-05 21:15:50.774722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.562 [2024-12-05 21:15:50.863230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.562 [2024-12-05 21:15:50.903205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.562 [2024-12-05 21:15:50.903238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:49.562 [2024-12-05 21:15:50.903246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.562 [2024-12-05 21:15:50.903253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.562 [2024-12-05 21:15:50.903259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.562 [2024-12-05 21:15:50.903856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 
21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 Malloc0 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.507 [2024-12-05 21:15:51.716254] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:50.507 [2024-12-05 21:15:51.752485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:50.507 21:15:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:50.507 [2024-12-05 21:15:51.853952] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:51.894 Initializing NVMe Controllers 00:22:51.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:51.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:51.894 Initialization complete. Launching workers. 00:22:51.894 ======================================================== 00:22:51.894 Latency(us) 00:22:51.894 Device Information : IOPS MiB/s Average min max 00:22:51.894 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165843.09 47871.26 191553.56 00:22:51.894 ======================================================== 00:22:51.894 Total : 25.00 3.12 165843.09 47871.26 191553.56 00:22:51.894 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.894 21:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.894 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.894 rmmod nvme_tcp 00:22:52.155 rmmod nvme_fabrics 00:22:52.155 rmmod nvme_keyring 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2143603 ']' 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2143603 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2143603 ']' 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2143603 
00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143603 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143603' 00:22:52.155 killing process with pid 2143603 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2143603 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2143603 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.155 21:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.155 21:15:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.701 00:22:54.701 real 0m13.600s 00:22:54.701 user 0m5.388s 00:22:54.701 sys 0m6.782s 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:54.701 ************************************ 00:22:54.701 END TEST nvmf_wait_for_buf 00:22:54.701 ************************************ 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.701 21:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.837 
21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:02.837 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.837 21:16:03 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:02.837 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:02.837 Found net devices under 0000:31:00.0: cvl_0_0 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:02.837 Found net devices under 0000:31:00.1: cvl_0_1 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 ************************************ 00:23:02.837 START TEST nvmf_perf_adq 00:23:02.837 ************************************ 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:02.837 * Looking for test storage... 00:23:02.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.837 21:16:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.837 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.838 --rc genhtml_branch_coverage=1 00:23:02.838 --rc genhtml_function_coverage=1 00:23:02.838 --rc genhtml_legend=1 00:23:02.838 --rc geninfo_all_blocks=1 00:23:02.838 --rc geninfo_unexecuted_blocks=1 00:23:02.838 00:23:02.838 ' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.838 --rc genhtml_branch_coverage=1 00:23:02.838 --rc genhtml_function_coverage=1 00:23:02.838 --rc genhtml_legend=1 00:23:02.838 --rc geninfo_all_blocks=1 00:23:02.838 --rc geninfo_unexecuted_blocks=1 00:23:02.838 00:23:02.838 ' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.838 --rc genhtml_branch_coverage=1 00:23:02.838 --rc genhtml_function_coverage=1 00:23:02.838 --rc genhtml_legend=1 00:23:02.838 --rc geninfo_all_blocks=1 00:23:02.838 --rc geninfo_unexecuted_blocks=1 00:23:02.838 00:23:02.838 ' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.838 --rc genhtml_branch_coverage=1 00:23:02.838 --rc genhtml_function_coverage=1 00:23:02.838 --rc genhtml_legend=1 00:23:02.838 --rc geninfo_all_blocks=1 00:23:02.838 --rc geninfo_unexecuted_blocks=1 00:23:02.838 00:23:02.838 ' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.838 21:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.838 21:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.979 21:16:11 
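The non-fatal "line 33: [: : integer expression expected" entry above comes from test(1) receiving an empty string where `-eq` requires an integer, i.e. `[ '' -eq 1 ]`. A minimal reproduction and the usual defensive fix (the variable name below is assumed for illustration; the log does not show which variable was empty):

```shell
# [ "$flag" -eq 1 ] errors out when $flag expands to the empty string.
# Defaulting the expansion with ${flag:-0} sidesteps the error while
# preserving the "not equal to 1" outcome of the original check.
flag=""                              # empty, as in the logged check
if [ "${flag:-0}" -eq 1 ]; then
  msg="feature enabled"
else
  msg="feature disabled"
fi
echo "$msg"
```

Because the script runs without `set -e` on that line, the error is only cosmetic here, but the `${var:-0}` form keeps the log clean.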
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:10.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:10.979 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:10.979 Found net devices under 0000:31:00.0: cvl_0_0 00:23:10.979 21:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:10.979 Found net devices under 0000:31:00.1: cvl_0_1 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
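The discovery loop logged above (nvmf/common.sh lines 410-429) resolves each matched PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. A self-contained sketch of that pattern, using a throwaway fake sysfs tree so it runs without the CI host's E810 hardware (directory names mirror the log; the temp-dir scaffolding is added for the demo):

```shell
# Build a fake sysfs layout: each PCI device directory has a net/ subdir
# whose entries are the interface names, as on a real Linux system.
root=$(mktemp -d)
mkdir -p "$root/0000:31:00.0/net/cvl_0_0" "$root/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
  pci_net_devs=("$root/$pci/net/"*)           # full sysfs paths
  pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -rf "$root"
```

On the real host the same glob against `/sys/bus/pci/devices` yields cvl_0_0 and cvl_0_1, which is exactly what the "Found net devices under 0000:31:00.x" lines report.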
00:23:10.979 21:16:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:12.366 21:16:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:14.277 21:16:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.561 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:19.562 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:19.562 21:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:19.562 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:19.562 Found net devices under 0000:31:00.0: cvl_0_0 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:19.562 Found net devices under 0000:31:00.1: cvl_0_1 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:19.562 21:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:19.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:19.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:23:19.824 00:23:19.824 --- 10.0.0.2 ping statistics --- 00:23:19.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.824 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:19.824 00:23:19.824 --- 10.0.0.1 ping statistics --- 00:23:19.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.824 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2154874 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2154874 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2154874 ']' 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.824 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.824 [2024-12-05 21:16:21.155669] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:23:19.824 [2024-12-05 21:16:21.155737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.824 [2024-12-05 21:16:21.246147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.085 [2024-12-05 21:16:21.288468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.085 [2024-12-05 21:16:21.288506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.085 [2024-12-05 21:16:21.288514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.085 [2024-12-05 21:16:21.288521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.085 [2024-12-05 21:16:21.288527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
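The namespace plumbing logged a few entries back (nvmf/common.sh lines 250-291) can be condensed as follows: the first E810 port (cvl_0_0) is moved into a fresh network namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction verifies the link. The `run` echo wrapper below is a stand-in so the sketch is runnable without root or that CI host's cvl_* interfaces; replace it with direct execution (as the test does) on real hardware:

```shell
run() { echo "+ $*"; }   # dry-run stand-in; the log executes these for real

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ping -c 1 10.0.0.2
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Isolating the target NIC in its own namespace is what lets a single machine act as both NVMe/TCP target and initiator over a real physical link, which is why nvmf_tgt is later launched under `ip netns exec cvl_0_0_ns_spdk`.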
00:23:20.085 [2024-12-05 21:16:21.290158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.085 [2024-12-05 21:16:21.290276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.085 [2024-12-05 21:16:21.290433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.085 [2024-12-05 21:16:21.290434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.656 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.656 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:20.656 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.656 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.656 21:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:20.656 21:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.656 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.657 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.657 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:20.657 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.657 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.918 [2024-12-05 21:16:22.141531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.918 Malloc1 00:23:20.918 21:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:20.918 [2024-12-05 21:16:22.208277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2155225 00:23:20.918 21:16:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:20.918 21:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:22.847 "tick_rate": 2400000000, 00:23:22.847 "poll_groups": [ 00:23:22.847 { 00:23:22.847 "name": "nvmf_tgt_poll_group_000", 00:23:22.847 "admin_qpairs": 1, 00:23:22.847 "io_qpairs": 1, 00:23:22.847 "current_admin_qpairs": 1, 00:23:22.847 "current_io_qpairs": 1, 00:23:22.847 "pending_bdev_io": 0, 00:23:22.847 "completed_nvme_io": 20289, 00:23:22.847 "transports": [ 00:23:22.847 { 00:23:22.847 "trtype": "TCP" 00:23:22.847 } 00:23:22.847 ] 00:23:22.847 }, 00:23:22.847 { 00:23:22.847 "name": "nvmf_tgt_poll_group_001", 00:23:22.847 "admin_qpairs": 0, 00:23:22.847 "io_qpairs": 1, 00:23:22.847 "current_admin_qpairs": 0, 00:23:22.847 "current_io_qpairs": 1, 00:23:22.847 "pending_bdev_io": 0, 00:23:22.847 "completed_nvme_io": 29422, 00:23:22.847 "transports": [ 00:23:22.847 { 00:23:22.847 "trtype": "TCP" 00:23:22.847 } 00:23:22.847 ] 00:23:22.847 }, 00:23:22.847 { 00:23:22.847 "name": "nvmf_tgt_poll_group_002", 00:23:22.847 "admin_qpairs": 0, 00:23:22.847 "io_qpairs": 1, 00:23:22.847 "current_admin_qpairs": 0, 00:23:22.847 "current_io_qpairs": 1, 00:23:22.847 "pending_bdev_io": 0, 00:23:22.847 "completed_nvme_io": 21684, 00:23:22.847 
"transports": [ 00:23:22.847 { 00:23:22.847 "trtype": "TCP" 00:23:22.847 } 00:23:22.847 ] 00:23:22.847 }, 00:23:22.847 { 00:23:22.847 "name": "nvmf_tgt_poll_group_003", 00:23:22.847 "admin_qpairs": 0, 00:23:22.847 "io_qpairs": 1, 00:23:22.847 "current_admin_qpairs": 0, 00:23:22.847 "current_io_qpairs": 1, 00:23:22.847 "pending_bdev_io": 0, 00:23:22.847 "completed_nvme_io": 20453, 00:23:22.847 "transports": [ 00:23:22.847 { 00:23:22.847 "trtype": "TCP" 00:23:22.847 } 00:23:22.847 ] 00:23:22.847 } 00:23:22.847 ] 00:23:22.847 }' 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:22.847 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:23.112 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:23.112 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:23.112 21:16:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2155225 00:23:31.260 Initializing NVMe Controllers 00:23:31.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:31.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:31.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:31.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:31.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:31.260 Initialization complete. Launching workers. 
00:23:31.260 ======================================================== 00:23:31.260 Latency(us) 00:23:31.260 Device Information : IOPS MiB/s Average min max 00:23:31.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11150.90 43.56 5740.09 1694.46 10322.43 00:23:31.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15155.50 59.20 4222.41 1441.58 7639.30 00:23:31.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13631.80 53.25 4694.85 1266.49 10635.21 00:23:31.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13848.80 54.10 4620.61 1686.48 10244.52 00:23:31.261 ======================================================== 00:23:31.261 Total : 53787.00 210.11 4759.31 1266.49 10635.21 00:23:31.261 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.261 rmmod nvme_tcp 00:23:31.261 rmmod nvme_fabrics 00:23:31.261 rmmod nvme_keyring 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:31.261 21:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2154874 ']' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2154874 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2154874 ']' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2154874 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154874 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154874' 00:23:31.261 killing process with pid 2154874 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2154874 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2154874 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.261 21:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.803 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.803 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:33.803 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:33.803 21:16:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:35.185 21:16:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:37.098 21:16:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:42.389 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:42.389 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.389 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.389 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.390 21:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.390 21:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:42.390 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:23:42.390 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:42.390 Found net devices under 0000:31:00.0: cvl_0_0 
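The `Found net devices under 0000:31:00.0: cvl_0_0` line above comes from nvmf/common.sh resolving each NVMe-capable PCI function to its kernel interface by globbing sysfs (`/sys/bus/pci/devices/$pci/net/*`). A minimal standalone sketch of that lookup — the temporary fake sysfs tree and the `cvl_0_0` name are illustrative stand-ins, not taken from the harness:

```shell
set -eu
# Illustrative only: build a fake sysfs layout so the glob can be
# demonstrated without real E810 hardware.
sysroot=$(mktemp -d)
pci=0000:31:00.0
mkdir -p "$sysroot/bus/pci/devices/$pci/net/cvl_0_0"

# Same idea as nvmf/common.sh: each entry under the device's net/
# directory names a kernel interface bound to that PCI function.
for path in "$sysroot/bus/pci/devices/$pci/net/"*; do
    net_dev=${path##*/}
    echo "Found net devices under $pci: $net_dev"
done

rm -rf "$sysroot"
```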
00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:42.390 Found net devices under 0000:31:00.1: cvl_0_1 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.390 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:23:42.391 00:23:42.391 --- 10.0.0.2 ping statistics --- 00:23:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.391 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:23:42.391 00:23:42.391 --- 10.0.0.1 ping statistics --- 00:23:42.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.391 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:42.391 net.core.busy_poll = 1 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:42.391 net.core.busy_read = 1 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:42.391 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2159708 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2159708 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2159708 ']' 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.651 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.652 21:16:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:42.652 [2024-12-05 21:16:43.993578] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:23:42.652 [2024-12-05 21:16:43.993648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.652 [2024-12-05 21:16:44.084029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.911 [2024-12-05 21:16:44.125803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.911 [2024-12-05 21:16:44.125841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.911 [2024-12-05 21:16:44.125849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.911 [2024-12-05 21:16:44.125856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:42.911 [2024-12-05 21:16:44.125867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.911 [2024-12-05 21:16:44.127464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.911 [2024-12-05 21:16:44.127581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.911 [2024-12-05 21:16:44.127740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.911 [2024-12-05 21:16:44.127741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.480 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.739 [2024-12-05 21:16:44.970574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:43.739 21:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.739 21:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.739 Malloc1 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.739 [2024-12-05 21:16:45.040341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2160058 
00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:43.739 21:16:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:45.643 "tick_rate": 2400000000, 00:23:45.643 "poll_groups": [ 00:23:45.643 { 00:23:45.643 "name": "nvmf_tgt_poll_group_000", 00:23:45.643 "admin_qpairs": 1, 00:23:45.643 "io_qpairs": 2, 00:23:45.643 "current_admin_qpairs": 1, 00:23:45.643 "current_io_qpairs": 2, 00:23:45.643 "pending_bdev_io": 0, 00:23:45.643 "completed_nvme_io": 27458, 00:23:45.643 "transports": [ 00:23:45.643 { 00:23:45.643 "trtype": "TCP" 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "name": "nvmf_tgt_poll_group_001", 00:23:45.643 "admin_qpairs": 0, 00:23:45.643 "io_qpairs": 2, 00:23:45.643 "current_admin_qpairs": 0, 00:23:45.643 "current_io_qpairs": 2, 00:23:45.643 "pending_bdev_io": 0, 00:23:45.643 "completed_nvme_io": 38353, 00:23:45.643 "transports": [ 00:23:45.643 { 00:23:45.643 "trtype": "TCP" 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "name": "nvmf_tgt_poll_group_002", 00:23:45.643 "admin_qpairs": 0, 00:23:45.643 "io_qpairs": 0, 00:23:45.643 "current_admin_qpairs": 0, 
00:23:45.643 "current_io_qpairs": 0, 00:23:45.643 "pending_bdev_io": 0, 00:23:45.643 "completed_nvme_io": 0, 00:23:45.643 "transports": [ 00:23:45.643 { 00:23:45.643 "trtype": "TCP" 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "name": "nvmf_tgt_poll_group_003", 00:23:45.643 "admin_qpairs": 0, 00:23:45.643 "io_qpairs": 0, 00:23:45.643 "current_admin_qpairs": 0, 00:23:45.643 "current_io_qpairs": 0, 00:23:45.643 "pending_bdev_io": 0, 00:23:45.643 "completed_nvme_io": 0, 00:23:45.643 "transports": [ 00:23:45.643 { 00:23:45.643 "trtype": "TCP" 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }' 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:45.643 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:45.902 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:45.902 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:45.902 21:16:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2160058 00:23:54.032 Initializing NVMe Controllers 00:23:54.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:54.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:54.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:54.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:54.032 Initialization complete. Launching workers. 
00:23:54.032 ======================================================== 00:23:54.032 Latency(us) 00:23:54.032 Device Information : IOPS MiB/s Average min max 00:23:54.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7471.10 29.18 8583.61 1164.94 52129.93 00:23:54.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9439.10 36.87 6780.44 1179.43 53600.98 00:23:54.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8842.60 34.54 7238.87 1165.77 49635.05 00:23:54.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12213.70 47.71 5239.58 1046.03 50332.58 00:23:54.032 ======================================================== 00:23:54.032 Total : 37966.50 148.31 6746.35 1046.03 53600.98 00:23:54.032 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.032 rmmod nvme_tcp 00:23:54.032 rmmod nvme_fabrics 00:23:54.032 rmmod nvme_keyring 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:54.032 21:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2159708 ']' 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2159708 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2159708 ']' 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2159708 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159708 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159708' 00:23:54.032 killing process with pid 2159708 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2159708 00:23:54.032 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2159708 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:54.292 
21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.292 21:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:57.585 00:23:57.585 real 0m54.834s 00:23:57.585 user 2m49.692s 00:23:57.585 sys 0m12.159s 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:57.585 ************************************ 00:23:57.585 END TEST nvmf_perf_adq 00:23:57.585 ************************************ 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.585 ************************************ 00:23:57.585 START TEST nvmf_shutdown 00:23:57.585 ************************************ 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:57.585 * Looking for test storage... 00:23:57.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.585 21:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:57.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.585 --rc genhtml_branch_coverage=1 00:23:57.585 --rc genhtml_function_coverage=1 00:23:57.585 --rc genhtml_legend=1 00:23:57.585 --rc geninfo_all_blocks=1 00:23:57.585 --rc geninfo_unexecuted_blocks=1 00:23:57.585 00:23:57.585 ' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:57.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.585 --rc genhtml_branch_coverage=1 00:23:57.585 --rc genhtml_function_coverage=1 00:23:57.585 --rc genhtml_legend=1 00:23:57.585 --rc geninfo_all_blocks=1 00:23:57.585 --rc geninfo_unexecuted_blocks=1 00:23:57.585 00:23:57.585 ' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:57.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.585 --rc genhtml_branch_coverage=1 00:23:57.585 --rc genhtml_function_coverage=1 00:23:57.585 --rc genhtml_legend=1 00:23:57.585 --rc geninfo_all_blocks=1 00:23:57.585 --rc geninfo_unexecuted_blocks=1 00:23:57.585 00:23:57.585 ' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:57.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.585 --rc genhtml_branch_coverage=1 00:23:57.585 --rc genhtml_function_coverage=1 00:23:57.585 --rc genhtml_legend=1 00:23:57.585 --rc geninfo_all_blocks=1 00:23:57.585 --rc geninfo_unexecuted_blocks=1 00:23:57.585 00:23:57.585 ' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.585 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:57.586 ************************************ 00:23:57.586 START TEST nvmf_shutdown_tc1 00:23:57.586 ************************************ 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:57.586 21:16:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.586 21:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.586 21:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.586 21:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.586 21:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:07.607 21:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.607 21:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:07.607 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.607 21:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:07.607 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:07.607 Found net devices under 0000:31:00.0: cvl_0_0 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:07.607 Found net devices under 0000:31:00.1: cvl_0_1 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:07.607 21:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.607 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:07.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:24:07.608 00:24:07.608 --- 10.0.0.2 ping statistics --- 00:24:07.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.608 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:24:07.608 00:24:07.608 --- 10.0.0.1 ping statistics --- 00:24:07.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.608 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2167206 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2167206 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2167206 ']' 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:07.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.608 21:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.608 [2024-12-05 21:17:07.741921] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:07.608 [2024-12-05 21:17:07.741987] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.608 [2024-12-05 21:17:07.853345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.608 [2024-12-05 21:17:07.905027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.608 [2024-12-05 21:17:07.905079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.608 [2024-12-05 21:17:07.905088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.608 [2024-12-05 21:17:07.905096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.608 [2024-12-05 21:17:07.905102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:07.608 [2024-12-05 21:17:07.907459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.608 [2024-12-05 21:17:07.907626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.608 [2024-12-05 21:17:07.907792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.608 [2024-12-05 21:17:07.907792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.608 [2024-12-05 21:17:08.597460] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.608 21:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.608 21:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.608 Malloc1 00:24:07.608 [2024-12-05 21:17:08.720305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.608 Malloc2 00:24:07.608 Malloc3 00:24:07.608 Malloc4 00:24:07.608 Malloc5 00:24:07.608 Malloc6 00:24:07.608 Malloc7 00:24:07.608 Malloc8 00:24:07.608 Malloc9 
00:24:07.870 Malloc10 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2167446 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2167446 /var/tmp/bdevperf.sock 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2167446 ']' 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.870 { 00:24:07.870 "params": { 00:24:07.870 "name": "Nvme$subsystem", 00:24:07.870 "trtype": "$TEST_TRANSPORT", 00:24:07.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.870 "adrfam": "ipv4", 00:24:07.870 "trsvcid": "$NVMF_PORT", 00:24:07.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.870 "hdgst": ${hdgst:-false}, 00:24:07.870 "ddgst": ${ddgst:-false} 00:24:07.870 }, 00:24:07.870 "method": "bdev_nvme_attach_controller" 00:24:07.870 } 00:24:07.870 EOF 00:24:07.870 )") 00:24:07.870 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": 
${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 [2024-12-05 21:17:09.181048] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:07.871 [2024-12-05 21:17:09.181103] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": 
"$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:07.871 { 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme$subsystem", 00:24:07.871 "trtype": "$TEST_TRANSPORT", 00:24:07.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": 
"$NVMF_PORT", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.871 "hdgst": ${hdgst:-false}, 00:24:07.871 "ddgst": ${ddgst:-false} 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 } 00:24:07.871 EOF 00:24:07.871 )") 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:07.871 21:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme1", 00:24:07.871 "trtype": "tcp", 00:24:07.871 "traddr": "10.0.0.2", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "4420", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.871 "hdgst": false, 00:24:07.871 "ddgst": false 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 },{ 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme2", 00:24:07.871 "trtype": "tcp", 00:24:07.871 "traddr": "10.0.0.2", 00:24:07.871 "adrfam": "ipv4", 00:24:07.871 "trsvcid": "4420", 00:24:07.871 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.871 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:07.871 "hdgst": false, 00:24:07.871 "ddgst": false 00:24:07.871 }, 00:24:07.871 "method": "bdev_nvme_attach_controller" 00:24:07.871 },{ 00:24:07.871 "params": { 00:24:07.871 "name": "Nvme3", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:07.872 "hdgst": false, 00:24:07.872 
"ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme4", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme5", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme6", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme7", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme8", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 
"trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme9", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 },{ 00:24:07.872 "params": { 00:24:07.872 "name": "Nvme10", 00:24:07.872 "trtype": "tcp", 00:24:07.872 "traddr": "10.0.0.2", 00:24:07.872 "adrfam": "ipv4", 00:24:07.872 "trsvcid": "4420", 00:24:07.872 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:07.872 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:07.872 "hdgst": false, 00:24:07.872 "ddgst": false 00:24:07.872 }, 00:24:07.872 "method": "bdev_nvme_attach_controller" 00:24:07.872 }' 00:24:07.872 [2024-12-05 21:17:09.261034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.872 [2024-12-05 21:17:09.297486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2167446 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:09.253 21:17:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:10.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2167446 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2167206 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.196 { 00:24:10.196 "params": { 00:24:10.196 "name": "Nvme$subsystem", 00:24:10.196 "trtype": "$TEST_TRANSPORT", 00:24:10.196 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:24:10.196 "adrfam": "ipv4", 00:24:10.196 "trsvcid": "$NVMF_PORT", 00:24:10.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.196 "hdgst": ${hdgst:-false}, 00:24:10.196 "ddgst": ${ddgst:-false} 00:24:10.196 }, 00:24:10.196 "method": "bdev_nvme_attach_controller" 00:24:10.196 } 00:24:10.196 EOF 00:24:10.196 )") 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.196 { 00:24:10.196 "params": { 00:24:10.196 "name": "Nvme$subsystem", 00:24:10.196 "trtype": "$TEST_TRANSPORT", 00:24:10.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.196 "adrfam": "ipv4", 00:24:10.196 "trsvcid": "$NVMF_PORT", 00:24:10.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.196 "hdgst": ${hdgst:-false}, 00:24:10.196 "ddgst": ${ddgst:-false} 00:24:10.196 }, 00:24:10.196 "method": "bdev_nvme_attach_controller" 00:24:10.196 } 00:24:10.196 EOF 00:24:10.196 )") 00:24:10.196 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.460 { 00:24:10.460 "params": { 00:24:10.460 "name": "Nvme$subsystem", 00:24:10.460 "trtype": "$TEST_TRANSPORT", 00:24:10.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.460 "adrfam": "ipv4", 00:24:10.460 "trsvcid": "$NVMF_PORT", 00:24:10.460 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.460 "hdgst": ${hdgst:-false}, 00:24:10.460 "ddgst": ${ddgst:-false} 00:24:10.460 }, 00:24:10.460 "method": "bdev_nvme_attach_controller" 00:24:10.460 } 00:24:10.460 EOF 00:24:10.460 )") 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.460 { 00:24:10.460 "params": { 00:24:10.460 "name": "Nvme$subsystem", 00:24:10.460 "trtype": "$TEST_TRANSPORT", 00:24:10.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.460 "adrfam": "ipv4", 00:24:10.460 "trsvcid": "$NVMF_PORT", 00:24:10.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.460 "hdgst": ${hdgst:-false}, 00:24:10.460 "ddgst": ${ddgst:-false} 00:24:10.460 }, 00:24:10.460 "method": "bdev_nvme_attach_controller" 00:24:10.460 } 00:24:10.460 EOF 00:24:10.460 )") 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.460 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.460 { 00:24:10.460 "params": { 00:24:10.460 "name": "Nvme$subsystem", 00:24:10.460 "trtype": "$TEST_TRANSPORT", 00:24:10.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.460 "adrfam": "ipv4", 00:24:10.460 "trsvcid": "$NVMF_PORT", 00:24:10.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.461 "hdgst": 
${hdgst:-false}, 00:24:10.461 "ddgst": ${ddgst:-false} 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 } 00:24:10.461 EOF 00:24:10.461 )") 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.461 { 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme$subsystem", 00:24:10.461 "trtype": "$TEST_TRANSPORT", 00:24:10.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "$NVMF_PORT", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.461 "hdgst": ${hdgst:-false}, 00:24:10.461 "ddgst": ${ddgst:-false} 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 } 00:24:10.461 EOF 00:24:10.461 )") 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.461 { 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme$subsystem", 00:24:10.461 "trtype": "$TEST_TRANSPORT", 00:24:10.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "$NVMF_PORT", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.461 "hdgst": ${hdgst:-false}, 00:24:10.461 "ddgst": ${ddgst:-false} 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 
00:24:10.461 } 00:24:10.461 EOF 00:24:10.461 )") 00:24:10.461 [2024-12-05 21:17:11.668748] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:10.461 [2024-12-05 21:17:11.668803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167960 ] 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.461 { 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme$subsystem", 00:24:10.461 "trtype": "$TEST_TRANSPORT", 00:24:10.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "$NVMF_PORT", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.461 "hdgst": ${hdgst:-false}, 00:24:10.461 "ddgst": ${ddgst:-false} 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 } 00:24:10.461 EOF 00:24:10.461 )") 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.461 { 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme$subsystem", 00:24:10.461 "trtype": "$TEST_TRANSPORT", 00:24:10.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 
"trsvcid": "$NVMF_PORT", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.461 "hdgst": ${hdgst:-false}, 00:24:10.461 "ddgst": ${ddgst:-false} 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 } 00:24:10.461 EOF 00:24:10.461 )") 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.461 { 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme$subsystem", 00:24:10.461 "trtype": "$TEST_TRANSPORT", 00:24:10.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "$NVMF_PORT", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.461 "hdgst": ${hdgst:-false}, 00:24:10.461 "ddgst": ${ddgst:-false} 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 } 00:24:10.461 EOF 00:24:10.461 )") 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:10.461 21:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme1", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme2", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme3", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme4", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 
00:24:10.461 "name": "Nvme5", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme6", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme7", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme8", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme9", 00:24:10.461 "trtype": "tcp", 00:24:10.461 "traddr": "10.0.0.2", 00:24:10.461 "adrfam": "ipv4", 00:24:10.461 "trsvcid": "4420", 00:24:10.461 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:10.461 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:10.461 "hdgst": false, 00:24:10.461 "ddgst": false 00:24:10.461 }, 00:24:10.461 "method": "bdev_nvme_attach_controller" 00:24:10.461 },{ 00:24:10.461 "params": { 00:24:10.461 "name": "Nvme10", 00:24:10.462 "trtype": "tcp", 00:24:10.462 "traddr": "10.0.0.2", 00:24:10.462 "adrfam": "ipv4", 00:24:10.462 "trsvcid": "4420", 00:24:10.462 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:10.462 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:10.462 "hdgst": false, 00:24:10.462 "ddgst": false 00:24:10.462 }, 00:24:10.462 "method": "bdev_nvme_attach_controller" 00:24:10.462 }' 00:24:10.462 [2024-12-05 21:17:11.749008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.462 [2024-12-05 21:17:11.784987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.847 Running I/O for 1 seconds... 00:24:13.048 1865.00 IOPS, 116.56 MiB/s 00:24:13.048 Latency(us) 00:24:13.048 [2024-12-05T20:17:14.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.048 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme1n1 : 1.17 219.60 13.73 0.00 0.00 288520.32 22282.24 253405.87 00:24:13.048 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme2n1 : 1.17 218.38 13.65 0.00 0.00 285364.48 19770.03 251658.24 00:24:13.048 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme3n1 : 1.07 239.42 14.96 0.00 0.00 254696.11 20862.29 248162.99 00:24:13.048 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme4n1 : 1.07 238.94 14.93 0.00 0.00 250465.71 14090.24 249910.61 00:24:13.048 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme5n1 : 1.18 217.48 13.59 0.00 0.00 271998.08 19988.48 253405.87 00:24:13.048 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme6n1 : 1.17 224.18 14.01 0.00 0.00 257711.77 3126.61 251658.24 00:24:13.048 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme7n1 : 1.19 269.23 16.83 0.00 0.00 212067.33 19005.44 253405.87 00:24:13.048 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme8n1 : 1.18 270.29 16.89 0.00 0.00 207283.54 23156.05 221948.59 00:24:13.048 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme9n1 : 1.19 268.66 16.79 0.00 0.00 204577.88 13216.43 248162.99 00:24:13.048 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.048 Verification LBA range: start 0x0 length 0x400 00:24:13.048 Nvme10n1 : 1.19 225.15 14.07 0.00 0.00 238739.22 3549.87 270882.13 00:24:13.048 [2024-12-05T20:17:14.485Z] =================================================================================================================== 00:24:13.048 [2024-12-05T20:17:14.485Z] Total : 2391.33 149.46 0.00 0.00 244416.21 3126.61 270882.13 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:13.048 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.049 rmmod nvme_tcp 00:24:13.049 rmmod nvme_fabrics 00:24:13.049 rmmod nvme_keyring 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2167206 ']' 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2167206 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2167206 ']' 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2167206 00:24:13.049 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167206 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167206' 00:24:13.310 killing process with pid 2167206 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2167206 00:24:13.310 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2167206 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:13.571 21:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.571 21:17:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.479 00:24:15.479 real 0m17.851s 00:24:15.479 user 0m33.847s 00:24:15.479 sys 0m7.603s 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:15.479 ************************************ 00:24:15.479 END TEST nvmf_shutdown_tc1 00:24:15.479 ************************************ 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.479 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:15.740 ************************************ 00:24:15.740 
START TEST nvmf_shutdown_tc2 00:24:15.740 ************************************ 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.740 21:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.740 21:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:15.740 21:17:16 
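The `e810`/`x722`/`mlx` array setup traced above buckets NICs by their PCI "vendor:device" ID before deciding which interfaces the test may use. A standalone sketch of that classification pattern, using the IDs recorded in this trace; `classify_nic` is a hypothetical helper name, not part of nvmf/common.sh:

```shell
# Group a NIC by PCI "vendor:device" ID, mirroring the array setup in
# the trace above. IDs are the ones the trace registers; classify_nic
# itself is a hypothetical illustration.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;     # Intel X722
    0x15b3:*)                    echo mlx ;;      # Mellanox ConnectX
    *)                           echo unknown ;;
  esac
}
classify_nic 0x8086:0x159b   # the device ID found later in this trace
```

This is why the trace below matches `0x159b` against the Mellanox patterns (`\0\x\1\0\1\7`, `\0\x\1\0\1\9`) and falls through: the device is an E810-class Intel NIC.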
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:15.740 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:15.740 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:15.740 21:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.740 21:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:15.740 Found net devices under 0000:31:00.0: cvl_0_0 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:15.740 Found net devices under 0000:31:00.1: cvl_0_1 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.740 21:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.740 21:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.740 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.740 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.740 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.740 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:24:16.001 00:24:16.001 --- 10.0.0.2 ping statistics --- 00:24:16.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.001 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:24:16.001 00:24:16.001 --- 10.0.0.1 ping statistics --- 00:24:16.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.001 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.001 21:17:17 
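The device-discovery steps earlier in this trace (common.sh@411 and @427) locate the interface behind each PCI address by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix, which is how `cvl_0_0` and `cvl_0_1` are found. A sketch of that lookup pattern, run against a temporary directory tree so it works without the real hardware:

```shell
# Reproduce the pci -> net-device lookup from the trace against a fake
# sysfs tree (no hardware needed). The real code globs under
# /sys/bus/pci/devices/$pci/net/ instead of $sysfs.
sysfs=$(mktemp -d)
pci=0000:31:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"        # stand-in for the sysfs entry
pci_net_devs=("$sysfs/$pci/net/"*)        # glob, as in common.sh@411
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the leaf names (@427)
echo "Found net devices under $pci: ${pci_net_devs[0]}"
rm -r "$sysfs"
```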
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2169081 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2169081 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2169081 ']' 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.001 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:16.002 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.002 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.002 21:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:16.002 [2024-12-05 21:17:17.363551] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:16.002 [2024-12-05 21:17:17.363620] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.261 [2024-12-05 21:17:17.465588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.261 [2024-12-05 21:17:17.499492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.261 [2024-12-05 21:17:17.499540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.261 [2024-12-05 21:17:17.499546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.261 [2024-12-05 21:17:17.499550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.261 [2024-12-05 21:17:17.499555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
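The `waitforlisten 2169081` step above blocks until the freshly started `nvmf_tgt` is serving `/var/tmp/spdk.sock`. A minimal sketch of that kind of bounded poll loop; `wait_for_path` is a hypothetical reduction, and the real helper in autotest_common.sh additionally probes that the RPC server answers:

```shell
# Hypothetical reduction of the waitforlisten idea: poll for a path with
# a bounded retry count (the trace shows max_retries=100; the real
# helper also verifies the RPC socket responds, not just that it exists).
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  until [ -e "$path" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.1
  done
}
```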
00:24:16.261 [2024-12-05 21:17:17.500895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.261 [2024-12-05 21:17:17.501119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.261 [2024-12-05 21:17:17.501276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.261 [2024-12-05 21:17:17.501277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.831 [2024-12-05 21:17:18.209162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.831 21:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:16.831 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:17.092 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.092 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:17.092 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:17.092 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.092 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.092 Malloc1 00:24:17.092 [2024-12-05 21:17:18.321468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.092 Malloc2 00:24:17.092 Malloc3 00:24:17.092 Malloc4 00:24:17.092 Malloc5 00:24:17.092 Malloc6 00:24:17.352 Malloc7 00:24:17.352 Malloc8 00:24:17.352 Malloc9 
00:24:17.352 Malloc10 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2169461 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2169461 /var/tmp/bdevperf.sock 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2169461 ']' 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:17.352 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 
00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 [2024-12-05 21:17:18.770238] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:17.353 [2024-12-05 21:17:18.770294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169461 ] 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.353 { 00:24:17.353 "params": { 00:24:17.353 "name": "Nvme$subsystem", 00:24:17.353 "trtype": "$TEST_TRANSPORT", 00:24:17.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.353 "adrfam": "ipv4", 00:24:17.353 "trsvcid": "$NVMF_PORT", 00:24:17.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.353 "hdgst": ${hdgst:-false}, 00:24:17.353 "ddgst": ${ddgst:-false} 00:24:17.353 }, 00:24:17.353 "method": "bdev_nvme_attach_controller" 00:24:17.353 } 00:24:17.353 EOF 00:24:17.353 )") 00:24:17.353 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.614 { 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme$subsystem", 00:24:17.614 "trtype": "$TEST_TRANSPORT", 00:24:17.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "$NVMF_PORT", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.614 "hdgst": ${hdgst:-false}, 00:24:17.614 "ddgst": ${ddgst:-false} 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 } 00:24:17.614 EOF 00:24:17.614 )") 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.614 21:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.614 { 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme$subsystem", 00:24:17.614 "trtype": "$TEST_TRANSPORT", 00:24:17.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "$NVMF_PORT", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.614 "hdgst": ${hdgst:-false}, 00:24:17.614 "ddgst": ${ddgst:-false} 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 } 00:24:17.614 EOF 00:24:17.614 )") 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:17.614 21:17:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme1", 00:24:17.614 "trtype": "tcp", 00:24:17.614 "traddr": "10.0.0.2", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "4420", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.614 "hdgst": false, 00:24:17.614 "ddgst": false 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 },{ 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme2", 00:24:17.614 "trtype": "tcp", 00:24:17.614 "traddr": "10.0.0.2", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "4420", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:17.614 "hdgst": false, 00:24:17.614 "ddgst": false 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 },{ 
00:24:17.614 "params": { 00:24:17.614 "name": "Nvme3", 00:24:17.614 "trtype": "tcp", 00:24:17.614 "traddr": "10.0.0.2", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "4420", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:17.614 "hdgst": false, 00:24:17.614 "ddgst": false 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 },{ 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme4", 00:24:17.614 "trtype": "tcp", 00:24:17.614 "traddr": "10.0.0.2", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "4420", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:17.614 "hdgst": false, 00:24:17.614 "ddgst": false 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 },{ 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme5", 00:24:17.614 "trtype": "tcp", 00:24:17.614 "traddr": "10.0.0.2", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "4420", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:17.614 "hdgst": false, 00:24:17.614 "ddgst": false 00:24:17.614 }, 00:24:17.614 "method": "bdev_nvme_attach_controller" 00:24:17.614 },{ 00:24:17.614 "params": { 00:24:17.614 "name": "Nvme6", 00:24:17.614 "trtype": "tcp", 00:24:17.614 "traddr": "10.0.0.2", 00:24:17.614 "adrfam": "ipv4", 00:24:17.614 "trsvcid": "4420", 00:24:17.614 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:17.614 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:17.615 "hdgst": false, 00:24:17.615 "ddgst": false 00:24:17.615 }, 00:24:17.615 "method": "bdev_nvme_attach_controller" 00:24:17.615 },{ 00:24:17.615 "params": { 00:24:17.615 "name": "Nvme7", 00:24:17.615 "trtype": "tcp", 00:24:17.615 "traddr": "10.0.0.2", 00:24:17.615 "adrfam": "ipv4", 00:24:17.615 "trsvcid": "4420", 00:24:17.615 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:17.615 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:24:17.615 "hdgst": false, 00:24:17.615 "ddgst": false 00:24:17.615 }, 00:24:17.615 "method": "bdev_nvme_attach_controller" 00:24:17.615 },{ 00:24:17.615 "params": { 00:24:17.615 "name": "Nvme8", 00:24:17.615 "trtype": "tcp", 00:24:17.615 "traddr": "10.0.0.2", 00:24:17.615 "adrfam": "ipv4", 00:24:17.615 "trsvcid": "4420", 00:24:17.615 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:17.615 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:17.615 "hdgst": false, 00:24:17.615 "ddgst": false 00:24:17.615 }, 00:24:17.615 "method": "bdev_nvme_attach_controller" 00:24:17.615 },{ 00:24:17.615 "params": { 00:24:17.615 "name": "Nvme9", 00:24:17.615 "trtype": "tcp", 00:24:17.615 "traddr": "10.0.0.2", 00:24:17.615 "adrfam": "ipv4", 00:24:17.615 "trsvcid": "4420", 00:24:17.615 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:17.615 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:17.615 "hdgst": false, 00:24:17.615 "ddgst": false 00:24:17.615 }, 00:24:17.615 "method": "bdev_nvme_attach_controller" 00:24:17.615 },{ 00:24:17.615 "params": { 00:24:17.615 "name": "Nvme10", 00:24:17.615 "trtype": "tcp", 00:24:17.615 "traddr": "10.0.0.2", 00:24:17.615 "adrfam": "ipv4", 00:24:17.615 "trsvcid": "4420", 00:24:17.615 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:17.615 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:17.615 "hdgst": false, 00:24:17.615 "ddgst": false 00:24:17.615 }, 00:24:17.615 "method": "bdev_nvme_attach_controller" 00:24:17.615 }' 00:24:17.615 [2024-12-05 21:17:18.849025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.615 [2024-12-05 21:17:18.885400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.526 Running I/O for 10 seconds... 
00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:19.526 21:17:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:19.786 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:19.786 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:19.786 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:19.787 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:20.048 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2169461 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2169461 
']' 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2169461 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169461 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169461' 00:24:20.049 killing process with pid 2169461 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2169461 00:24:20.049 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2169461 00:24:20.310 Received shutdown signal, test time was about 0.955153 seconds 00:24:20.310 00:24:20.310 Latency(us) 00:24:20.310 [2024-12-05T20:17:21.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme1n1 : 0.95 270.07 16.88 0.00 0.00 233392.21 14090.24 248162.99 00:24:20.310 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme2n1 : 0.93 206.91 12.93 0.00 0.00 298945.14 18568.53 253405.87 
00:24:20.310 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme3n1 : 0.95 269.81 16.86 0.00 0.00 224406.61 18350.08 265639.25 00:24:20.310 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme4n1 : 0.91 210.62 13.16 0.00 0.00 280761.46 15728.64 249910.61 00:24:20.310 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme5n1 : 0.92 207.89 12.99 0.00 0.00 278223.36 21845.33 235929.60 00:24:20.310 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme6n1 : 0.94 271.35 16.96 0.00 0.00 208628.48 29054.29 218453.33 00:24:20.310 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme7n1 : 0.94 273.31 17.08 0.00 0.00 201983.57 15837.87 248162.99 00:24:20.310 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme8n1 : 0.95 266.17 16.64 0.00 0.00 202938.85 17476.27 244667.73 00:24:20.310 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme9n1 : 0.94 208.77 13.05 0.00 0.00 250278.70 6034.77 258648.75 00:24:20.310 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:20.310 Verification LBA range: start 0x0 length 0x400 00:24:20.310 Nvme10n1 : 0.93 205.43 12.84 0.00 0.00 249473.42 20316.16 267386.88 00:24:20.310 [2024-12-05T20:17:21.747Z] =================================================================================================================== 00:24:20.310 
[2024-12-05T20:17:21.747Z] Total : 2390.32 149.40 0.00 0.00 238865.18 6034.77 267386.88 00:24:20.310 21:17:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2169081 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.254 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.254 rmmod nvme_tcp 00:24:21.254 rmmod nvme_fabrics 00:24:21.254 rmmod nvme_keyring 00:24:21.515 21:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2169081 ']' 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2169081 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2169081 ']' 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2169081 00:24:21.515 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169081 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169081' 00:24:21.516 killing process with pid 2169081 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2169081 00:24:21.516 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2169081 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:21.778 21:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.778 21:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.778 21:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.778 21:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.778 21:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.778 21:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.693 00:24:23.693 real 0m8.151s 00:24:23.693 user 0m24.942s 00:24:23.693 sys 0m1.354s 00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.693 ************************************ 00:24:23.693 END TEST nvmf_shutdown_tc2 00:24:23.693 ************************************ 00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.693 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:23.953 ************************************ 00:24:23.953 START TEST nvmf_shutdown_tc3 00:24:23.953 ************************************ 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.953 21:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:23.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.953 21:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:23.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:23.953 Found net devices under 0000:31:00.0: cvl_0_0 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.953 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:23.954 Found net devices under 0000:31:00.1: cvl_0_1 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.954 21:17:25 
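The discovery loop traced above resolves each NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/<pci>/net/*` and stripping the path down to the basename. A minimal sketch of that pattern, using a temporary directory to mimic the sysfs layout so it runs without real hardware (the PCI addresses and `cvl_0_*` names are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs glob from nvmf/common.sh: for each PCI address,
# list <sysfs>/<pci>/net/* and keep only the interface basenames.
set -euo pipefail

# Mimic the sysfs layout in a temp dir so the sketch runs anywhere.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
    # Glob the net/ subdirectory, then strip leading path components,
    # exactly as pci_net_devs=("${pci_net_devs[@]##*/}") does in the log.
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "net_devs: ${net_devs[*]}"
rm -rf "$sysfs"
```

On a real host the glob runs against `/sys` directly; the temp-dir scaffolding here is only so the sketch is self-contained.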
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:23.954 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:24:24.214 00:24:24.214 --- 10.0.0.2 ping statistics --- 00:24:24.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.214 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:24:24.214 00:24:24.214 --- 10.0.0.1 ping statistics --- 00:24:24.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.214 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
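The `nvmf_tcp_init` sequence above builds a two-port loopback topology: one physical port (`cvl_0_0`) is moved into the namespace `cvl_0_0_ns_spdk` as the target side, its peer (`cvl_0_1`) stays in the default namespace as the initiator, each gets a 10.0.0.x/24 address, port 4420 is opened in iptables, and both directions are verified with `ping`. Those `ip`/`iptables` commands need root and real interfaces, so the sketch below is a dry-run that only prints each command (swap `run()` for a real executor as root to apply it):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology traced above.
set -euo pipefail

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2 INITIATOR_IP=10.0.0.1
NS="${TARGET_IF}_ns_spdk"

run() { echo "+ $*"; }   # replace body with "$@" (as root) to actually apply

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

Moving the target port into its own namespace is what lets a single host exercise real TCP traffic between initiator and target without a second machine.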
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2170928 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2170928 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2170928 ']' 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.214 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.215 21:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.215 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.215 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.215 [2024-12-05 21:17:25.609223] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:24.215 [2024-12-05 21:17:25.609270] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.475 [2024-12-05 21:17:25.684682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.475 [2024-12-05 21:17:25.714542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.475 [2024-12-05 21:17:25.714572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.475 [2024-12-05 21:17:25.714578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.475 [2024-12-05 21:17:25.714583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.475 [2024-12-05 21:17:25.714587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
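The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line comes from `waitforlisten`, which polls until the freshly launched `nvmf_tgt` exposes its RPC socket. A simplified stand-in for that helper (the real one also verifies the pid is alive and that the socket answers RPCs; the path, retry count, and sleep interval here are illustrative):

```shell
#!/usr/bin/env bash
# Simplified stand-in for waitforlisten: poll until a path appears,
# up to max_retries attempts at 0.1s intervals.
set -euo pipefail

wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demo with a regular file standing in for /var/tmp/spdk.sock:
sock=$(mktemp -u)
( sleep 0.3; : > "$sock" ) &
wait_for_path "$sock" 50 && echo "listening: $sock"
wait
rm -f "$sock"
```

Polling with a bounded retry count is what turns a slow target start into a clean test failure instead of a hang.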
00:24:24.475 [2024-12-05 21:17:25.716095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.475 [2024-12-05 21:17:25.716309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.475 [2024-12-05 21:17:25.716425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.475 [2024-12-05 21:17:25.716425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.475 [2024-12-05 21:17:25.852528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.475 21:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.475 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.735 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:24.735 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:24.735 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:24.735 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.735 21:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.735 Malloc1 00:24:24.735 [2024-12-05 21:17:25.960829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.735 Malloc2 00:24:24.735 Malloc3 00:24:24.735 Malloc4 00:24:24.735 Malloc5 00:24:24.735 Malloc6 00:24:24.735 Malloc7 00:24:24.995 Malloc8 00:24:24.995 Malloc9 
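The repeated `for i in "${num_subsystems[@]}"` / `cat` trace above is shutdown.sh appending one heredoc of RPC commands per subsystem to `rpcs.txt`, which `rpc_cmd` then replays in a single session — producing the Malloc1..Malloc10 bdevs in the log. A sketch of that batching pattern; note the specific RPC names and sizes below are the standard SPDK rpc.py ones and are an assumption here, since the log shows only the loop and the resulting bdevs:

```shell
#!/usr/bin/env bash
# Sketch of the rpcs.txt batching pattern: one heredoc per subsystem is
# appended to a file that rpc_cmd later replays in one session.
# RPC names/arguments are illustrative assumptions, not taken from the log.
set -euo pipefail

rpcs=$(mktemp)
num_subsystems=({1..10})

for i in "${num_subsystems[@]}"; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

echo "batched $(grep -c bdev_malloc_create "$rpcs") subsystems into $rpcs"
# In the real test this file is fed to: rpc_cmd < rpcs.txt
```

Batching the RPCs into one file avoids paying the rpc.py startup cost forty times over.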
00:24:24.995 Malloc10 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2171060 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2171060 /var/tmp/bdevperf.sock 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2171060 ']' 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.995 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.995 { 00:24:24.995 "params": { 00:24:24.995 "name": "Nvme$subsystem", 00:24:24.995 "trtype": "$TEST_TRANSPORT", 00:24:24.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.995 "adrfam": "ipv4", 00:24:24.995 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.996 { 00:24:24.996 "params": { 00:24:24.996 "name": "Nvme$subsystem", 00:24:24.996 "trtype": "$TEST_TRANSPORT", 00:24:24.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.996 "adrfam": "ipv4", 00:24:24.996 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.996 { 00:24:24.996 "params": { 00:24:24.996 "name": "Nvme$subsystem", 00:24:24.996 "trtype": "$TEST_TRANSPORT", 00:24:24.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.996 "adrfam": "ipv4", 00:24:24.996 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:24:24.996 { 00:24:24.996 "params": { 00:24:24.996 "name": "Nvme$subsystem", 00:24:24.996 "trtype": "$TEST_TRANSPORT", 00:24:24.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.996 "adrfam": "ipv4", 00:24:24.996 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.996 { 00:24:24.996 "params": { 00:24:24.996 "name": "Nvme$subsystem", 00:24:24.996 "trtype": "$TEST_TRANSPORT", 00:24:24.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.996 "adrfam": "ipv4", 00:24:24.996 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.996 { 00:24:24.996 "params": { 00:24:24.996 "name": "Nvme$subsystem", 00:24:24.996 "trtype": "$TEST_TRANSPORT", 
00:24:24.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.996 "adrfam": "ipv4", 00:24:24.996 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:24.996 [2024-12-05 21:17:26.421624] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:24.996 [2024-12-05 21:17:26.421676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171060 ] 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:24.996 { 00:24:24.996 "params": { 00:24:24.996 "name": "Nvme$subsystem", 00:24:24.996 "trtype": "$TEST_TRANSPORT", 00:24:24.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.996 "adrfam": "ipv4", 00:24:24.996 "trsvcid": "$NVMF_PORT", 00:24:24.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.996 "hdgst": ${hdgst:-false}, 00:24:24.996 "ddgst": ${ddgst:-false} 00:24:24.996 }, 00:24:24.996 "method": "bdev_nvme_attach_controller" 00:24:24.996 } 00:24:24.996 EOF 00:24:24.996 )") 00:24:24.996 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:25.255 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:25.255 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:25.255 { 00:24:25.255 "params": { 00:24:25.255 "name": "Nvme$subsystem", 00:24:25.255 "trtype": "$TEST_TRANSPORT", 00:24:25.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.255 "adrfam": "ipv4", 00:24:25.255 "trsvcid": "$NVMF_PORT", 00:24:25.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.255 "hdgst": ${hdgst:-false}, 00:24:25.255 "ddgst": ${ddgst:-false} 00:24:25.255 }, 00:24:25.255 "method": "bdev_nvme_attach_controller" 00:24:25.255 } 00:24:25.255 EOF 00:24:25.255 )") 00:24:25.255 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:25.255 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:25.255 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:25.255 { 00:24:25.255 "params": { 00:24:25.255 "name": "Nvme$subsystem", 00:24:25.255 "trtype": "$TEST_TRANSPORT", 00:24:25.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.255 "adrfam": "ipv4", 00:24:25.255 "trsvcid": "$NVMF_PORT", 00:24:25.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.256 "hdgst": ${hdgst:-false}, 00:24:25.256 "ddgst": ${ddgst:-false} 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 } 00:24:25.256 EOF 00:24:25.256 )") 00:24:25.256 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:25.256 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:25.256 21:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:25.256 { 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme$subsystem", 00:24:25.256 "trtype": "$TEST_TRANSPORT", 00:24:25.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "$NVMF_PORT", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.256 "hdgst": ${hdgst:-false}, 00:24:25.256 "ddgst": ${ddgst:-false} 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 } 00:24:25.256 EOF 00:24:25.256 )") 00:24:25.256 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:25.256 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:24:25.256 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:25.256 21:17:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme1", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme2", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 
00:24:25.256 "params": { 00:24:25.256 "name": "Nvme3", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme4", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme5", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme6", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme7", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:25.256 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme8", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme9", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 },{ 00:24:25.256 "params": { 00:24:25.256 "name": "Nvme10", 00:24:25.256 "trtype": "tcp", 00:24:25.256 "traddr": "10.0.0.2", 00:24:25.256 "adrfam": "ipv4", 00:24:25.256 "trsvcid": "4420", 00:24:25.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:25.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:25.256 "hdgst": false, 00:24:25.256 "ddgst": false 00:24:25.256 }, 00:24:25.256 "method": "bdev_nvme_attach_controller" 00:24:25.256 }' 00:24:25.256 [2024-12-05 21:17:26.500540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.256 [2024-12-05 21:17:26.536987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.639 Running I/O for 10 seconds... 
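The xtrace above shows nvmf/common.sh building one `bdev_nvme_attach_controller` params block per subsystem into a bash array via a heredoc, then joining the blocks with `IFS=,` and normalizing the result with `jq .`. A minimal stand-alone sketch of that pattern; the transport/address defaults and the three-subsystem count here are illustrative assumptions, not values taken from the harness:

```shell
#!/usr/bin/env bash
# Assumed defaults for a stand-alone run (the harness exports these itself).
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in 1 2 3; do
  # Append one attach-controller request per subsystem, exactly as the
  # traced loop does; hdgst/ddgst default to false when unset.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join the blocks with commas into one JSON array; the real script then
# pipes this through `jq .` before handing it to bdevperf.
IFS=,
json_array="[${config[*]}]"
printf '%s\n' "$json_array"
```

Each element of `config` is a complete JSON object, so the comma-join produces a valid array regardless of how many subsystems the caller requests.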
00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:26.900 21:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:26.900 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:27.160 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:27.160 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:27.160 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:27.160 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:27.160 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.160 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.422 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:24:27.422 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:27.422 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:27.422 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:27.698 21:17:28 
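The `waitforio` helper traced above (target/shutdown.sh@58-70) polls `bdev_get_iostat` until the bdev's read count reaches 100, retrying up to 10 times with a 0.25 s sleep; the log reaches the threshold on the third poll (3, 67, 131). A self-contained sketch of that loop: the `rpc_cmd` stub and its 64-reads-per-poll growth rate are assumptions standing in for SPDK's rpc.py against `/var/tmp/bdevperf.sock`, and `sed` replaces the `jq -r '.bdevs[0].num_read_ops'` filter so the sketch has no external dependencies:

```shell
#!/usr/bin/env bash
# Stub state: simulated cumulative read count, kept in a file so the
# increment survives the command-substitution subshell.
statefile=$(mktemp)
echo 3 >"$statefile"

rpc_cmd() { # stub: pretend I/O grows by 64 reads between polls
    local n
    n=$(cat "$statefile")
    echo "{\"bdevs\": [{\"name\": \"$3\", \"num_read_ops\": $n}]}"
    echo $((n + 64)) >"$statefile"
}

waitforio() {
    local bdev=$1 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        # Real helper: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat \
        #   -b "$bdev" | jq -r '.bdevs[0].num_read_ops'
        count=$(rpc_cmd bdev_get_iostat -b "$bdev" |
            sed -n 's/.*"num_read_ops": \([0-9]*\).*/\1/p')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio Nvme1n1 && echo "I/O threshold reached"  # prints: I/O threshold reached
```

Only after this loop returns 0 does the test proceed to `killprocess` on the target, guaranteeing the shutdown happens while I/O is actually in flight.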
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2170928 00:24:27.698 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2170928 ']' 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2170928 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170928 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170928' 00:24:27.699 killing process with pid 2170928 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2170928 00:24:27.699 21:17:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2170928 00:24:27.699 [2024-12-05 21:17:28.988045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec8c90 is same with the state(6) to be set 00:24:27.699 [2024-12-05 21:17:28.988120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec8c90 is same with the state(6) to be set 00:24:27.699 [2024-12-05 21:17:28.988127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ec8c90 is same with the state(6) to be set 00:24:27.699 [2024-12-05 21:17:28.989816] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40730 is same with the state(6) to be set 00:24:27.699 [2024-12-05 21:17:28.991751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991812] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991873] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.700 [2024-12-05 21:17:28.991931] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991990] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.991999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992045] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.992077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9180 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994922] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994982] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.994996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995044] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995101] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995159] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.701 [2024-12-05 21:17:28.995164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.995202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9650 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996178] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996237] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996301] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996358] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996417] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996475] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9b40 is same with the state(6) to be set 00:24:27.702 [2024-12-05 21:17:28.996959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9ec0 is same with the state(6) to be set [last message repeated through 2024-12-05 21:17:28.997232] 00:24:27.703 [2024-12-05 21:17:28.998405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eca710 is same with the state(6) to be set [last message repeated through 2024-12-05 21:17:28.998716] 00:24:27.704 [2024-12-05 21:17:28.999283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecac00 is same with the state(6) to be set 00:24:27.704 [2024-12-05 21:17:28.999478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40260 is same with the state(6) to be set [last message repeated through 2024-12-05 21:17:28.999784] 00:24:27.705 [2024-12-05 21:17:29.007263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.705 [2024-12-05 21:17:29.007298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.705 [2024-12-05 21:17:29.007310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.705 [2024-12-05 21:17:29.007318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.705 [2024-12-05 21:17:29.007326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.705 [2024-12-05 21:17:29.007334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.705 [2024-12-05 21:17:29.007343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.705 [2024-12-05 21:17:29.007350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.705 [2024-12-05 21:17:29.007357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f610 is same with the state(6) to be set [same ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequence (cid:0-3) repeated for tqpair=0xed6cc0, 0xa60430, 0xa60230, 0xe90b50, 0xa51960, 0xa53b10 and subsequent qpairs] 00:24:27.706 [2024-12-05 21:17:29.008007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa625e0 is same with the state(6) to be set 00:24:27.706 [2024-12-05 21:17:29.008037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.706 [2024-12-05 21:17:29.008047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.706 [2024-12-05 21:17:29.008065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.706 [2024-12-05 21:17:29.008080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.706 [2024-12-05 21:17:29.008096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed72f0 is same with the state(6) to be set 00:24:27.706 [2024-12-05 21:17:29.008713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 
21:17:29.008840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40260 is same with the state(6) to be set 00:24:27.706 [2024-12-05 21:17:29.008937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1
lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40260 is same with the state(6) to be set 00:24:27.706 [2024-12-05 21:17:29.008959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.008983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.008994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 
[2024-12-05 21:17:29.009133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.706 [2024-12-05 21:17:29.009300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.706 [2024-12-05 21:17:29.009310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 
21:17:29.009524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.009851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.009878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.707 [2024-12-05 21:17:29.009988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.010000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.010012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 [2024-12-05 21:17:29.010020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.707 [2024-12-05 21:17:29.010029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.707 
[2024-12-05 21:17:29.010036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.708 [2024-12-05 21:17:29.010046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.708
[... identical WRITE command/completion pairs elided: sqid:1, cid:4 through cid:63, nsid:1, lba 25088 through 32640 in steps of 128, len:128 each, every command aborted with SQ DELETION (00/08) ...]
[2024-12-05 21:17:29.018885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f610 (9): Bad file descriptor 00:24:27.709 [2024-12-05 21:17:29.018919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed6cc0 (9): Bad file descriptor 00:24:27.709
[... elided: ASYNC EVENT REQUEST (0c) commands qid:0 cid:0 through cid:3 each aborted with SQ DELETION (00/08); recv-state notice for tqpair=0xe91dd0; further "Failed to flush tqpair (9): Bad file descriptor" errors for tqpairs 0xa60430, 0xa60230, 0xe90b50, 0xa51960, 0xa53b10, 0xa625e0, 0xed72f0 ...]
[2024-12-05 21:17:29.019355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.709 [2024-12-05 21:17:29.019371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.709
[... identical READ command/completion pairs elided: sqid:1, cid:1 through cid:41, nsid:1, lba 24704 through 29824 in steps of 128, len:128 each, every command aborted with SQ DELETION (00/08) ...]
[2024-12-05 21:17:29.020142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.710 [2024-12-05 21:17:29.020259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.710 [2024-12-05 21:17:29.020269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020360] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.020525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.020534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.024576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:27.711 [2024-12-05 21:17:29.024616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:27.711 [2024-12-05 21:17:29.025524] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.025553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:27.711 [2024-12-05 21:17:29.026125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-12-05 21:17:29.026165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe90b50 with addr=10.0.0.2, port=4420 00:24:27.711 [2024-12-05 21:17:29.026177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90b50 is same with the state(6) to be set 00:24:27.711 [2024-12-05 21:17:29.026521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-12-05 21:17:29.026535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed6cc0 with addr=10.0.0.2, port=4420 00:24:27.711 [2024-12-05 21:17:29.026543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed6cc0 is same with the state(6) to be set 00:24:27.711 [2024-12-05 21:17:29.026595] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.026638] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.026678] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.026716] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.027063] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.027121] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.711 [2024-12-05 21:17:29.027488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-12-05 21:17:29.027505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa51960 with addr=10.0.0.2, port=4420 00:24:27.711 [2024-12-05 21:17:29.027513] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa51960 is same with the state(6) to be set 00:24:27.711 [2024-12-05 21:17:29.027526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90b50 (9): Bad file descriptor 00:24:27.711 [2024-12-05 21:17:29.027537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed6cc0 (9): Bad file descriptor 00:24:27.711 [2024-12-05 21:17:29.027641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa51960 (9): Bad file descriptor 00:24:27.711 [2024-12-05 21:17:29.027655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:27.711 [2024-12-05 21:17:29.027664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:27.711 [2024-12-05 21:17:29.027673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:27.711 [2024-12-05 21:17:29.027684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:27.711 [2024-12-05 21:17:29.027693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:27.711 [2024-12-05 21:17:29.027701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:27.711 [2024-12-05 21:17:29.027709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:27.711 [2024-12-05 21:17:29.027716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:24:27.711 [2024-12-05 21:17:29.027759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:27.711 [2024-12-05 21:17:29.027768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:27.711 [2024-12-05 21:17:29.027776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:27.711 [2024-12-05 21:17:29.027784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:27.711 [2024-12-05 21:17:29.028891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe91dd0 (9): Bad file descriptor 00:24:27.711 [2024-12-05 21:17:29.029032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029215] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.711 [2024-12-05 21:17:29.029223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.711 [2024-12-05 21:17:29.029233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 
21:17:29.029531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.712 [2024-12-05 21:17:29.029847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.712 [2024-12-05 21:17:29.029881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.712 [2024-12-05 21:17:29.029891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.029899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.029909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.029917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.029928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.029935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.029945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.029953] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.029963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.029971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.029981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.029989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.029999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.030138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.030147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6d800 is same with the state(6) to be set 00:24:27.713 [2024-12-05 21:17:29.031407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031533] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031845] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.713 [2024-12-05 21:17:29.031876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.713 [2024-12-05 21:17:29.031886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.031894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.031905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.031912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.031923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.031931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.031940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.031948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.031958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.031967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.031977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.031985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.031995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 
21:17:29.032161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.714 [2024-12-05 21:17:29.032484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.714 [2024-12-05 21:17:29.032601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.714 [2024-12-05 21:17:29.032610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6e910 is same with the state(6) to be set 00:24:27.714 [2024-12-05 21:17:29.033889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.033902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.033916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.033925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.033938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.033947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.033959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.033969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.033981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.033991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:27.715 [2024-12-05 21:17:29.034087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.715 [2024-12-05 21:17:29.034644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.715 [2024-12-05 21:17:29.034654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 
21:17:29.034823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.034990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.034998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.035008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.035016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.035027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.035034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.035045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.035052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.035063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.035072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.035083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.035091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.035100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3d9a0 is same with the state(6) to be set 00:24:27.716 [2024-12-05 21:17:29.036380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.716 [2024-12-05 21:17:29.036414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.716 [2024-12-05 21:17:29.036573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.716 [2024-12-05 21:17:29.036582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036824] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036928] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.036982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.036992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 
21:17:29.037141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.717 [2024-12-05 21:17:29.037308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.717 [2024-12-05 21:17:29.037315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.718 [2024-12-05 21:17:29.037457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.037555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.037564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6aaf0 is same with the state(6) to be set 00:24:27.718 [2024-12-05 21:17:29.038832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.038867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.038893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.038916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.038938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.038960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.038982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.038991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:27.718 [2024-12-05 21:17:29.039064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.718 [2024-12-05 21:17:29.039342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.718 [2024-12-05 21:17:29.039353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039583] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 
21:17:29.039794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.039983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.039991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.040001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.040009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.040020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.040028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.040038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.040046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.040055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe6e420 is same with the state(6) to be set 00:24:27.719 [2024-12-05 21:17:29.041339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.719 [2024-12-05 21:17:29.041356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.719 [2024-12-05 21:17:29.041370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.720 [2024-12-05 21:17:29.041400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.041987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.041995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.042013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.042031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.042051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.042070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.042088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.720 [2024-12-05 21:17:29.042106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.720 [2024-12-05 21:17:29.042116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 
21:17:29.042135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.721 [2024-12-05 21:17:29.042452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.042536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.042546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaa170 is same with the state(6) to be set 00:24:27.721 [2024-12-05 21:17:29.043797] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:27.721 [2024-12-05 21:17:29.043819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:27.721 [2024-12-05 21:17:29.043833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:27.721 [2024-12-05 21:17:29.043847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:27.721 [2024-12-05 21:17:29.043934] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:27.721 [2024-12-05 21:17:29.043960] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:24:27.721 [2024-12-05 21:17:29.044041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:27.721 [2024-12-05 21:17:29.044056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:27.721 [2024-12-05 21:17:29.044516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-12-05 21:17:29.044534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa53b10 with addr=10.0.0.2, port=4420 00:24:27.721 [2024-12-05 21:17:29.044543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53b10 is same with the state(6) to be set 00:24:27.721 [2024-12-05 21:17:29.044768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-12-05 21:17:29.044780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa60430 with addr=10.0.0.2, port=4420 00:24:27.721 [2024-12-05 21:17:29.044787] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60430 is same with the state(6) to be set 00:24:27.721 [2024-12-05 21:17:29.045109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-12-05 21:17:29.045122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa625e0 with addr=10.0.0.2, port=4420 00:24:27.721 [2024-12-05 21:17:29.045130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa625e0 is same with the state(6) to be set 00:24:27.721 [2024-12-05 21:17:29.045309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.721 [2024-12-05 21:17:29.045321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa60230 with addr=10.0.0.2, port=4420 00:24:27.721 [2024-12-05 21:17:29.045329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60230 is same with the state(6) to be set 00:24:27.721 [2024-12-05 21:17:29.046676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.046690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.046701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.721 [2024-12-05 21:17:29.046709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.721 [2024-12-05 21:17:29.046724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046941] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.046986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.046996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047039] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 
21:17:29.047252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.722 [2024-12-05 21:17:29.047437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.722 [2024-12-05 21:17:29.047444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.723 [2024-12-05 21:17:29.047454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.723 [2024-12-05 21:17:29.047462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.723 [2024-12-05 21:17:29.047472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.723 [2024-12-05 21:17:29.047479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.723 [2024-12-05 21:17:29.047490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.723 [2024-12-05 21:17:29.047498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.723 [2024-12-05 21:17:29.047508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.723 [2024-12-05 21:17:29.047516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.723 [2024-12-05 21:17:29.047526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.723 [2024-12-05 21:17:29.047534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.723 [2024-12-05 21:17:29.047544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.723 [2024-12-05 21:17:29.047552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.723 [2024-12-05 21:17:29.047562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047662]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:27.723 [2024-12-05 21:17:29.047842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.723 [2024-12-05 21:17:29.047850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe709f0 is same with the state(6) to be set
00:24:27.723 [2024-12-05 21:17:29.049642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:27.723 [2024-12-05
21:17:29.049669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:24:27.723 [2024-12-05 21:17:29.049679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:27.723 task offset: 25728 on job bdev=Nvme6n1 fails
00:24:27.723
00:24:27.723 Latency(us)
00:24:27.723 [2024-12-05T20:17:29.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.723 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme1n1 ended in about 0.97 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme1n1 : 0.97 136.75 8.55 60.67 0.00 320550.40 16820.91 256901.12
00:24:27.723 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme2n1 ended in about 0.98 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme2n1 : 0.98 196.92 12.31 65.64 0.00 236197.97 19988.48 241172.48
00:24:27.723 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme3n1 ended in about 0.98 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme3n1 : 0.98 196.42 12.28 65.47 0.00 231964.16 17694.72 246415.36
00:24:27.723 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme4n1 ended in about 0.98 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme4n1 : 0.98 195.92 12.25 65.31 0.00 227748.27 18786.99 244667.73
00:24:27.723 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme5n1 ended in about 0.97 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme5n1 : 0.97 198.83 12.43 66.28 0.00 219255.68 14964.05 248162.99
00:24:27.723 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme6n1 ended in about 0.96 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme6n1 : 0.96 199.38 12.46 66.46 0.00 213751.57 12288.00 221074.77
00:24:27.723 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme7n1 ended in about 0.98 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme7n1 : 0.98 195.43 12.21 65.14 0.00 213800.11 29491.20 234181.97
00:24:27.723 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme8n1 ended in about 0.96 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme8n1 : 0.96 199.13 12.45 66.38 0.00 204365.65 14854.83 246415.36
00:24:27.723 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme9n1 ended in about 0.99 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme9n1 : 0.99 129.26 8.08 64.63 0.00 274781.01 19223.89 249910.61
00:24:27.723 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.723 Job: Nvme10n1 ended in about 0.98 seconds with error
00:24:27.723 Verification LBA range: start 0x0 length 0x400
00:24:27.723 Nvme10n1 : 0.98 129.96 8.12 64.98 0.00 266651.31 15947.09 269134.51
00:24:27.723 [2024-12-05T20:17:29.160Z] ===================================================================================================================
00:24:27.723 [2024-12-05T20:17:29.160Z] Total : 1778.00 111.12 650.95 0.00 237142.75 12288.00 269134.51
00:24:27.723 [2024-12-05 21:17:29.073419] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:27.723 [2024-12-05 21:17:29.073449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:27.723 [2024-12-05 21:17:29.073762] posix.c:1054:posix_sock_create:
*ERROR*: connect() failed, errno = 111
00:24:27.723 [2024-12-05 21:17:29.073786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97f610 with addr=10.0.0.2, port=4420
00:24:27.723 [2024-12-05 21:17:29.073795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97f610 is same with the state(6) to be set
00:24:27.723 [2024-12-05 21:17:29.074134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.723 [2024-12-05 21:17:29.074147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed72f0 with addr=10.0.0.2, port=4420
00:24:27.723 [2024-12-05 21:17:29.074154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed72f0 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.074166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53b10 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.074178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60430 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.074188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa625e0 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.074197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60230 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.074657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.074672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed6cc0 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.074680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed6cc0 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.074878] posix.c:1054:posix_sock_create: *ERROR*:
connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.074891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe90b50 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.074898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe90b50 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.075195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.075206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa51960 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.075214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa51960 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.075609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.075620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe91dd0 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.075628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91dd0 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.075637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97f610 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.075647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed72f0 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.075656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.075663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.075671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.075681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.075690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.075697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.075707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.075714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.075722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.075729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.075736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.075742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.075750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.075756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.075763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.075769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.075817] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:24:27.724 [2024-12-05 21:17:29.075831] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:24:27.724 [2024-12-05 21:17:29.076203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed6cc0 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.076218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90b50 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.076228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa51960 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.076238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe91dd0 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.076246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.076254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.076262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.076268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.076276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.076282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.076290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.076297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.076336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:24:27.724 [2024-12-05 21:17:29.076348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:27.724 [2024-12-05 21:17:29.076357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:24:27.724 [2024-12-05 21:17:29.076367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:27.724 [2024-12-05 21:17:29.076398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.076410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.076417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.076425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.076432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.076439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.076447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.076453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.076461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.076467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.076476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.076483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.076491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.076497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.076504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.076512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.076867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.076882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa60230 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.076891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60230 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.077224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.077235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa625e0 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.077242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa625e0 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.077439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.077450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa60430 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.077457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60430 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.077749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.724 [2024-12-05 21:17:29.077759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa53b10 with addr=10.0.0.2, port=4420
00:24:27.724 [2024-12-05 21:17:29.077768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa53b10 is same with the state(6) to be set
00:24:27.724 [2024-12-05 21:17:29.077798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60230 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.077810]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa625e0 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.077822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60430 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.077832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53b10 (9): Bad file descriptor
00:24:27.724 [2024-12-05 21:17:29.077860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:24:27.724 [2024-12-05 21:17:29.077897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:24:27.724 [2024-12-05 21:17:29.077905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:24:27.724 [2024-12-05 21:17:29.077912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:24:27.724 [2024-12-05 21:17:29.077919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:24:27.725 [2024-12-05 21:17:29.077926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:24:27.725 [2024-12-05 21:17:29.077934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:24:27.725 [2024-12-05 21:17:29.077941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:24:27.725 [2024-12-05 21:17:29.077948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:24:27.725 [2024-12-05 21:17:29.077955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:24:27.725 [2024-12-05 21:17:29.077962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:24:27.725 [2024-12-05 21:17:29.077970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:24:27.725 [2024-12-05 21:17:29.077978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:27.725 [2024-12-05 21:17:29.077984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:27.725 [2024-12-05 21:17:29.077991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:27.725 [2024-12-05 21:17:29.077998] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:27.985 21:17:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2171060
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2171060
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2171060
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:28.926 rmmod nvme_tcp
00:24:28.926 rmmod nvme_fabrics
00:24:28.926 rmmod nvme_keyring
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:24:28.926 21:17:30
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2170928 ']'
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2170928
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2170928 ']'
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2170928
00:24:28.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2170928) - No such process
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2170928 is not found'
00:24:28.926 Process with pid 2170928 is not found
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- #
remove_spdk_ns 00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.926 21:17:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.470 00:24:31.470 real 0m7.265s 00:24:31.470 user 0m16.975s 00:24:31.470 sys 0m1.218s 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.470 ************************************ 00:24:31.470 END TEST nvmf_shutdown_tc3 00:24:31.470 ************************************ 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:31.470 ************************************ 00:24:31.470 START TEST nvmf_shutdown_tc4 00:24:31.470 ************************************ 00:24:31.470 21:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.470 21:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.470 21:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:31.470 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:31.470 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.470 21:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:24:31.470 Found net devices under 0000:31:00.0: cvl_0_0 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.470 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:31.471 Found net devices under 0000:31:00.1: cvl_0_1 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.471 21:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:31.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:24:31.471 00:24:31.471 --- 10.0.0.2 ping statistics --- 00:24:31.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.471 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:31.471 00:24:31.471 --- 10.0.0.1 ping statistics --- 00:24:31.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.471 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.471 21:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2172445 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2172445 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2172445 ']' 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.471 21:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.732 [2024-12-05 21:17:32.950507] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:24:31.732 [2024-12-05 21:17:32.950557] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.732 [2024-12-05 21:17:33.048280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.732 [2024-12-05 21:17:33.078246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.732 [2024-12-05 21:17:33.078274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.732 [2024-12-05 21:17:33.078280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.732 [2024-12-05 21:17:33.078285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.732 [2024-12-05 21:17:33.078290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.732 [2024-12-05 21:17:33.079504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.732 [2024-12-05 21:17:33.079660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.732 [2024-12-05 21:17:33.079813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.732 [2024-12-05 21:17:33.079815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:31.732 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.732 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:31.732 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.732 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.732 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 [2024-12-05 21:17:33.199884] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.993 21:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.993 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:31.993 Malloc1 00:24:31.993 [2024-12-05 21:17:33.312820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.993 Malloc2 00:24:31.993 Malloc3 00:24:31.993 Malloc4 00:24:32.253 Malloc5 00:24:32.253 Malloc6 00:24:32.253 Malloc7 00:24:32.253 Malloc8 00:24:32.253 Malloc9 
00:24:32.253 Malloc10 00:24:32.253 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.253 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:32.253 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.253 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:32.513 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2172713 00:24:32.513 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:32.513 21:17:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:32.514 [2024-12-05 21:17:33.780782] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2172445 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2172445 ']' 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2172445 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2172445 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2172445' 00:24:37.804 killing process with pid 2172445 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2172445 00:24:37.804 21:17:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2172445 00:24:37.804 [2024-12-05 21:17:38.791088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fffa0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 
21:17:38.791135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fffa0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fffa0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fffa0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fffa0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fffa0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200470 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200470 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200470 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200470 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200470 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791855] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.791893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200940 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.792310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffad0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.792332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffad0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.792339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffad0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.792345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffad0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.792350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffad0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.792355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ffad0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793043] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12012e0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12017b0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12017b0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12017b0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12017b0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793375] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12017b0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12017b0 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c80 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c80 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c80 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c80 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c80 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c80 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200e10 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200e10 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200e10 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200e10 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793916] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200e10 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.793921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1200e10 is same with the state(6) to be set 00:24:37.804 [2024-12-05 21:17:38.795104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795166] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.795184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248260 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 
00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.795987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 [2024-12-05 21:17:38.796237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202620 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.796253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202620 is same with the state(6) to be set 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write 
completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 [2024-12-05 21:17:38.796574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.796589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 starting I/O failed: -6 00:24:37.805 [2024-12-05 21:17:38.796595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.796600] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.796606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.796611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.796615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.796620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202af0 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.796796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202fc0 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.796811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202fc0 is same with the state(6) to be set 00:24:37.805 Write 
completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.796817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202fc0 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.796822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202fc0 is same with the state(6) to be set 00:24:37.805 [2024-12-05 21:17:38.796827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202fc0 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 [2024-12-05 21:17:38.796856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 starting I/O failed: -6 00:24:37.805 
Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.797238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202150 is same with the state(6) to be set 00:24:37.805 Write completed with error (sct=0, sc=8) 00:24:37.805 [2024-12-05 21:17:38.797252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202150 is same with the state(6) to be set 00:24:37.805 starting I/O failed: -6 00:24:37.806 [2024-12-05 21:17:38.797258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202150 is same with the state(6) to be set 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 [2024-12-05 21:17:38.797263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1202150 is same with the state(6) to be set 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, 
sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 [2024-12-05 21:17:38.797837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.806 NVMe io qpair process completion error 00:24:37.806 
[2024-12-05 21:17:38.798678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1249f80 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a470 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.798989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124a470 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.799197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12495c0 is same with the state(6) to be set 00:24:37.806 [2024-12-05 21:17:38.799212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12495c0 is same with the state(6) to be set 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error 
(sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed 
with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, 
sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 [2024-12-05 21:17:38.801202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 starting I/O failed: -6 00:24:37.806 NVMe io qpair process completion error 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 starting I/O failed: -6 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.806 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed 
with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 [2024-12-05 21:17:38.803513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ 
transport error -6 (No such device or address) on qpair id 1 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 
00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 [2024-12-05 21:17:38.804430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 
starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 
Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 [2024-12-05 21:17:38.805357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O 
failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.807 Write completed with error (sct=0, sc=8) 00:24:37.807 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting 
I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 
starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 [2024-12-05 21:17:38.806844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.808 NVMe io qpair process completion error 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 
Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error 
(sct=0, sc=8) 00:24:37.808 [2024-12-05 21:17:38.807997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.808 Write completed with error 
(sct=0, sc=8) 00:24:37.808 starting I/O failed: -6 00:24:37.808 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 [2024-12-05 21:17:38.808804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write 
completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 
00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 [2024-12-05 21:17:38.809727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: 
-6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O 
failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.809 Write completed with error (sct=0, sc=8) 00:24:37.809 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting 
I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 [2024-12-05 21:17:38.812246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.810 NVMe io qpair process completion error 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 Write completed with error (sct=0, sc=8) 00:24:37.810 starting I/O failed: -6 00:24:37.810 Write completed with error 
(sct=0, sc=8)
00:24:37.810 Write completed with error (sct=0, sc=8)
00:24:37.810 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.810 [2024-12-05 21:17:38.813485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.810 [2024-12-05 21:17:38.814477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.811 [2024-12-05 21:17:38.815397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.811 [2024-12-05 21:17:38.817064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:37.811 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.811 [2024-12-05 21:17:38.818135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.812 [2024-12-05 21:17:38.819046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.812 [2024-12-05 21:17:38.819975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.813 [2024-12-05 21:17:38.823018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:37.813 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.813 [2024-12-05 21:17:38.824140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.813 [2024-12-05 21:17:38.824972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.814 starting I/O failed: -6
00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 [2024-12-05 21:17:38.825919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, 
sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error 
(sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with 
error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 [2024-12-05 21:17:38.827380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.814 NVMe io qpair process completion error 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write 
completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 [2024-12-05 21:17:38.828578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write 
completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.814 starting I/O failed: -6 00:24:37.814 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error 
(sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 [2024-12-05 21:17:38.829411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 
00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with 
error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 [2024-12-05 21:17:38.830349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O 
failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting 
I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.815 starting I/O failed: -6 00:24:37.815 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 
starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 [2024-12-05 21:17:38.833315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.816 NVMe io qpair process completion error 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with 
error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 [2024-12-05 21:17:38.834577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error 
(sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 
00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 [2024-12-05 21:17:38.835393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 starting I/O failed: -6 00:24:37.816 Write completed with error (sct=0, sc=8) 00:24:37.816 
starting I/O failed: -6
00:24:37.816 Write completed with error (sct=0, sc=8)
00:24:37.816 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.817 [2024-12-05 21:17:38.836336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:37.817 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.817 [2024-12-05 21:17:38.838271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:37.817 NVMe io qpair process completion error
00:24:37.817 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.817 [2024-12-05 21:17:38.839355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:37.817 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.818 [2024-12-05 21:17:38.840187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:37.818 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.818 [2024-12-05 21:17:38.841145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:37.818 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:24:37.819 [2024-12-05 21:17:38.843030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:37.819 NVMe io qpair process completion error
00:24:37.819 [... repeated "Write completed with error (sct=0, sc=8)" lines omitted ...]
00:24:37.819 Initializing NVMe Controllers
00:24:37.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:37.819 Controller IO queue size 128, less than required.
00:24:37.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:37.819 Controller IO queue size 128, less than required.
00:24:37.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:37.819 Controller IO queue size 128, less than required.
00:24:37.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:37.819 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:37.820 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:37.820 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:37.820 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:37.820 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:37.820 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:37.820 Controller IO queue size 128, less than required.
00:24:37.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
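The error records above follow two fixed shapes: per-I/O completion failures carry the NVMe status pair (sct = status code type, 0 = generic command status; sc = status code, where 08h in the generic set is "Command Aborted due to SQ Deletion" in the NVMe base specification), and qpair-level failures report the negative errno returned by `spdk_nvme_qpair_process_completions()` (-6 is ENXIO, "No such device or address", as the log itself spells out). A minimal, hypothetical helper for summarizing such a log — the function name and structure are illustrative, not part of any SPDK tooling — could look like:

```python
import re
from collections import Counter

# Patterns copied from the record shapes seen in this log.
COMPLETION_RE = re.compile(r"Write completed with error \(sct=(\d+), sc=(\d+)\)")
TRANSPORT_RE = re.compile(r"CQ transport error (-\d+) .* on qpair id (\d+)")

def summarize(log_text: str) -> dict:
    """Count completion errors by (sct, sc) and transport errors by (errno, qpair id)."""
    completions = Counter(COMPLETION_RE.findall(log_text))
    transports = Counter(TRANSPORT_RE.findall(log_text))
    return {"completions": dict(completions), "transport": dict(transports)}
```

Run over a chunk like this one, it collapses thousands of identical records into a handful of counters, which makes the distinct transport errors (here, qpairs 1-4 on cnode3 and cnode6) easy to spot.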
00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:37.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:37.820 Initialization complete. Launching workers. 
00:24:37.820 ======================================================== 00:24:37.820 Latency(us) 00:24:37.820 Device Information : IOPS MiB/s Average min max 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1867.54 80.25 68861.52 862.09 119098.01 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1858.37 79.85 69356.30 594.21 129892.03 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1851.29 79.55 69181.53 917.06 127623.67 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1852.54 79.60 69154.63 824.72 129420.18 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1859.00 79.88 68950.64 690.17 124517.14 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1792.76 77.03 71519.94 944.87 128771.46 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1885.45 81.02 68044.37 832.43 119958.57 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1879.82 80.77 68267.21 688.89 129613.60 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1912.73 82.19 67132.52 816.64 133115.62 00:24:37.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1875.45 80.59 68491.81 728.91 120522.95 00:24:37.820 ======================================================== 00:24:37.820 Total : 18634.95 800.72 68879.13 594.21 133115.62 00:24:37.820 00:24:37.820 [2024-12-05 21:17:38.852160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f606c0 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f616b0 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1f609f0 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61380 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62360 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62540 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f619e0 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60060 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f60390 is same with the state(6) to be set 00:24:37.820 [2024-12-05 21:17:38.852443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f61050 is same with the state(6) to be set 00:24:37.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:37.820 21:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:38.760 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2172713 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2172713 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2172713 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.761 rmmod nvme_tcp 00:24:38.761 rmmod nvme_fabrics 00:24:38.761 rmmod nvme_keyring 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2172445 ']' 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2172445 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2172445 ']' 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2172445 00:24:38.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2172445) - No such process 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2172445 is not found' 00:24:38.761 Process with pid 2172445 is not found 
00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.761 21:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.304 00:24:41.304 real 0m9.687s 00:24:41.304 user 0m25.545s 00:24:41.304 sys 0m3.957s 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.304 21:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:41.304 ************************************ 00:24:41.304 END TEST nvmf_shutdown_tc4 00:24:41.304 ************************************ 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:41.304 00:24:41.304 real 0m43.514s 00:24:41.304 user 1m41.579s 00:24:41.304 sys 0m14.454s 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:41.304 ************************************ 00:24:41.304 END TEST nvmf_shutdown 00:24:41.304 ************************************ 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.304 21:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:41.305 ************************************ 00:24:41.305 START TEST nvmf_nsid 00:24:41.305 ************************************ 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:41.305 * Looking for test storage... 
00:24:41.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.305 
21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.305 --rc genhtml_branch_coverage=1 00:24:41.305 --rc genhtml_function_coverage=1 00:24:41.305 --rc genhtml_legend=1 00:24:41.305 --rc geninfo_all_blocks=1 00:24:41.305 --rc 
geninfo_unexecuted_blocks=1 00:24:41.305 00:24:41.305 ' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.305 --rc genhtml_branch_coverage=1 00:24:41.305 --rc genhtml_function_coverage=1 00:24:41.305 --rc genhtml_legend=1 00:24:41.305 --rc geninfo_all_blocks=1 00:24:41.305 --rc geninfo_unexecuted_blocks=1 00:24:41.305 00:24:41.305 ' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.305 --rc genhtml_branch_coverage=1 00:24:41.305 --rc genhtml_function_coverage=1 00:24:41.305 --rc genhtml_legend=1 00:24:41.305 --rc geninfo_all_blocks=1 00:24:41.305 --rc geninfo_unexecuted_blocks=1 00:24:41.305 00:24:41.305 ' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:41.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.305 --rc genhtml_branch_coverage=1 00:24:41.305 --rc genhtml_function_coverage=1 00:24:41.305 --rc genhtml_legend=1 00:24:41.305 --rc geninfo_all_blocks=1 00:24:41.305 --rc geninfo_unexecuted_blocks=1 00:24:41.305 00:24:41.305 ' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.305 21:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.305 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.306 21:17:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:49.442 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:49.442 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:49.442 Found net devices under 0000:31:00.0: cvl_0_0 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.442 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:49.443 Found net devices under 0000:31:00.1: cvl_0_1 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.443 21:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.443 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.703 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:49.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:24:49.703 00:24:49.703 --- 10.0.0.2 ping statistics --- 00:24:49.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.703 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:24:49.703 00:24:49.703 --- 10.0.0.1 ping statistics --- 00:24:49.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.703 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:49.703 21:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2178542 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2178542 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2178542 ']' 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.703 21:17:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:49.703 [2024-12-05 21:17:51.052920] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:24:49.703 [2024-12-05 21:17:51.053013] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.963 [2024-12-05 21:17:51.144105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.963 [2024-12-05 21:17:51.184801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.963 [2024-12-05 21:17:51.184839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.963 [2024-12-05 21:17:51.184848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.963 [2024-12-05 21:17:51.184855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.963 [2024-12-05 21:17:51.184867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
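The firewall rule opened above (nvmf/common.sh@287/@790) is inserted through an `ipts` wrapper that appends an `SPDK_NVMF` comment to every rule, so that teardown can later restore the ruleset minus the tagged entries (`iptables-save | grep -v SPDK_NVMF | iptables-restore`, visible near the end of this run). A minimal sketch of that tag-and-filter pattern, simulated on a plain string because real `iptables` needs root; the rule text below is illustrative, not taken from this host:

```shell
# Simulated iptables-save output: one rule carries the SPDK_NVMF tag
# that the cleanup path filters out, the others are untouched.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p icmp -j ACCEPT'

# Teardown keeps everything except the tagged rules (the real code
# pipes this straight into iptables-restore).
cleaned=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Tagging rules at insertion time means cleanup never has to remember which ports or interfaces were opened.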
00:24:49.963 [2024-12-05 21:17:51.185480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2178870 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.533 
21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6d90a845-8db8-40b2-abb2-15974f261387 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=426ec00e-2e3f-4c77-a27e-4c9b1ebcc5b8 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2833fedb-4e0b-4cc4-98f2-25dac47d7008 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.533 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:50.533 null0 00:24:50.533 null1 00:24:50.533 [2024-12-05 21:17:51.912608] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
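The three `uuidgen` values captured here are later checked against the NGUIDs that `nvme id-ns` reports: an NGUID is simply the namespace UUID with the dashes removed, compared case-insensitively. A minimal sketch of that conversion, using the first UUID from this run; the real logic is split across `uuid2nguid` (nvmf/common.sh@787, the `tr -d -`) and the uppercase `echo` in `nvme_get_nguid` (nsid.sh@43), so the single helper below is a simplification:

```shell
# Strip dashes from a UUID and uppercase the hex digits, yielding the
# 32-character NGUID form that nvme-cli reports for the namespace.
uuid2nguid() {
    printf '%s' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

nguid=$(uuid2nguid 6d90a845-8db8-40b2-abb2-15974f261387)
echo "$nguid"
```

This is exactly the comparison the `[[ 6D90A845... == \6\D\9\0... ]]` checks later in the trace perform.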
00:24:50.533 [2024-12-05 21:17:51.912662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178870 ] 00:24:50.533 null2 00:24:50.533 [2024-12-05 21:17:51.920602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.533 [2024-12-05 21:17:51.944778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.793 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2178870 /var/tmp/tgt2.sock 00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2178870 ']' 00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:50.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.794 21:17:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:50.794 [2024-12-05 21:17:52.009381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.794 [2024-12-05 21:17:52.045143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.053 21:17:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.053 21:17:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:51.053 21:17:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:51.313 [2024-12-05 21:17:52.531753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.313 [2024-12-05 21:17:52.547885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:51.313 nvme0n1 nvme0n2 00:24:51.313 nvme1n1 00:24:51.313 21:17:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:51.313 21:17:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:51.313 21:17:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:52.698 21:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:52.698 21:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:52.698 21:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:52.698 21:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6d90a845-8db8-40b2-abb2-15974f261387 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:53.640 21:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:53.640 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:53.900 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6d90a8458db840b2abb215974f261387 00:24:53.900 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6D90A8458DB840B2ABB215974F261387 00:24:53.900 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6D90A8458DB840B2ABB215974F261387 == \6\D\9\0\A\8\4\5\8\D\B\8\4\0\B\2\A\B\B\2\1\5\9\7\4\F\2\6\1\3\8\7 ]] 00:24:53.900 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 426ec00e-2e3f-4c77-a27e-4c9b1ebcc5b8 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:53.901 
21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=426ec00e2e3f4c77a27e4c9b1ebcc5b8 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 426EC00E2E3F4C77A27E4C9B1EBCC5B8 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 426EC00E2E3F4C77A27E4C9B1EBCC5B8 == \4\2\6\E\C\0\0\E\2\E\3\F\4\C\7\7\A\2\7\E\4\C\9\B\1\E\B\C\C\5\B\8 ]] 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2833fedb-4e0b-4cc4-98f2-25dac47d7008 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
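The `waitforblk` calls traced above (autotest_common.sh@1239–1250) all follow the same shape: probe with `lsblk -l -o NAME | grep -q -w <dev>`, and if the device is not there yet, sleep and retry up to a bounded number of attempts. A generalized sketch of that retry loop; the function name and limit are illustrative, not the helper's real interface:

```shell
# Retry an arbitrary probe command until it succeeds, sleeping one
# second between attempts, giving up after a fixed number of tries
# (waitforblk uses 15).
wait_for() {
    local i=0 limit=15
    while ! "$@"; do
        i=$((i + 1))
        [ "$i" -lt "$limit" ] || return 1
        sleep 1
    done
    return 0
}

# A real caller would probe for the block device, e.g.:
#   wait_for sh -c 'lsblk -l -o NAME | grep -q -w nvme0n1'
wait_for true && echo ready
```

Bounding the retries turns a hung fabric connect into a test failure instead of a stuck CI job.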
00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2833fedb4e0b4cc498f225dac47d7008 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2833FEDB4E0B4CC498F225DAC47D7008 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2833FEDB4E0B4CC498F225DAC47D7008 == \2\8\3\3\F\E\D\B\4\E\0\B\4\C\C\4\9\8\F\2\2\5\D\A\C\4\7\D\7\0\0\8 ]] 00:24:53.901 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2178870 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2178870 ']' 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2178870 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178870 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178870' 00:24:54.161 killing process with pid 2178870 00:24:54.161 21:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2178870 00:24:54.161 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2178870 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.422 rmmod nvme_tcp 00:24:54.422 rmmod nvme_fabrics 00:24:54.422 rmmod nvme_keyring 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2178542 ']' 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2178542 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2178542 ']' 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2178542 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:54.422 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.422 21:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178542 00:24:54.683 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.683 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.683 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178542' 00:24:54.683 killing process with pid 2178542 00:24:54.683 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2178542 00:24:54.683 21:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2178542 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.683 21:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.683 21:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.679 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.679 00:24:56.679 real 0m15.743s 00:24:56.679 user 0m11.494s 00:24:56.679 sys 0m7.416s 00:24:56.679 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.679 21:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:56.679 ************************************ 00:24:56.679 END TEST nvmf_nsid 00:24:56.679 ************************************ 00:24:56.956 21:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:56.956 00:24:56.956 real 13m24.305s 00:24:56.956 user 27m22.922s 00:24:56.956 sys 4m10.296s 00:24:56.956 21:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.956 21:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:56.956 ************************************ 00:24:56.956 END TEST nvmf_target_extra 00:24:56.956 ************************************ 00:24:56.956 21:17:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:56.956 21:17:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:56.956 21:17:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.956 21:17:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.956 ************************************ 00:24:56.956 START TEST nvmf_host 00:24:56.956 ************************************ 00:24:56.956 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:56.956 * Looking for test storage... 
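The trace that follows (scripts/common.sh@333–368, the `lt 1.15 2` lcov gate) compares dotted version strings by splitting both on dots and comparing field by field, numerically, padding the shorter one with zeros. A compact sketch of that comparison under the same rules; `ver_lt` is a hypothetical stand-in for the script's `lt`/`cmp_versions` pair:

```shell
# Return success iff dotted version $1 is strictly less than $2,
# comparing numeric fields left to right (missing fields count as 0).
ver_lt() {
    local IFS=. i x y
    local -a a=($1) b=($2)
    local n=${#a[@]}
    if [ "${#b[@]}" -gt "$n" ]; then n=${#b[@]}; fi
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0}
        y=${b[i]:-0}
        if ((x < y)); then return 0; fi
        if ((x > y)); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo yes
```

Numeric field-wise comparison is what makes 1.15 sort below 2 even though a plain string compare would order them the other way.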
00:24:56.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:56.956 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:56.956 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:56.956 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:56.956 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:56.956 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.957 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.256 --rc genhtml_branch_coverage=1 00:24:57.256 --rc genhtml_function_coverage=1 00:24:57.256 --rc genhtml_legend=1 00:24:57.256 --rc geninfo_all_blocks=1 00:24:57.256 --rc geninfo_unexecuted_blocks=1 00:24:57.256 00:24:57.256 ' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.256 --rc genhtml_branch_coverage=1 00:24:57.256 --rc genhtml_function_coverage=1 00:24:57.256 --rc genhtml_legend=1 00:24:57.256 --rc 
geninfo_all_blocks=1 00:24:57.256 --rc geninfo_unexecuted_blocks=1 00:24:57.256 00:24:57.256 ' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.256 --rc genhtml_branch_coverage=1 00:24:57.256 --rc genhtml_function_coverage=1 00:24:57.256 --rc genhtml_legend=1 00:24:57.256 --rc geninfo_all_blocks=1 00:24:57.256 --rc geninfo_unexecuted_blocks=1 00:24:57.256 00:24:57.256 ' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.256 --rc genhtml_branch_coverage=1 00:24:57.256 --rc genhtml_function_coverage=1 00:24:57.256 --rc genhtml_legend=1 00:24:57.256 --rc geninfo_all_blocks=1 00:24:57.256 --rc geninfo_unexecuted_blocks=1 00:24:57.256 00:24:57.256 ' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.256 ************************************ 00:24:57.256 START TEST nvmf_multicontroller 00:24:57.256 ************************************ 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:57.256 * Looking for test storage... 
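The `[: : integer expression expected` message from nvmf/common.sh line 33 above is bash's complaint when the `[ ... -eq ... ]` integer test receives an empty string (`'[' '' -eq 1 ']'` in the trace). A minimal sketch of both the failure and a defensive default, using a hypothetical variable name rather than the actual SPDK flag:

```shell
#!/usr/bin/env bash
# Sketch of the error seen in the trace: an empty variable passed to
# the numeric -eq operator makes the [ builtin fail with
# "[: : integer expression expected" (exit status 2, branch not taken).
SOME_FLAG=""                      # empty, as in the failing run

# Failing form (stderr suppressed here so the sketch itself stays quiet):
if [ "$SOME_FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Defensive form: default an empty/unset value to 0 before the integer test.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

Note the error is non-fatal in the log for the same reason as in the sketch: `[` returns a failure status, the `if` branch is simply not taken, and the script continues.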
00:24:57.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.256 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.257 --rc genhtml_branch_coverage=1 00:24:57.257 --rc genhtml_function_coverage=1 
00:24:57.257 --rc genhtml_legend=1 00:24:57.257 --rc geninfo_all_blocks=1 00:24:57.257 --rc geninfo_unexecuted_blocks=1 00:24:57.257 00:24:57.257 ' 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.257 --rc genhtml_branch_coverage=1 00:24:57.257 --rc genhtml_function_coverage=1 00:24:57.257 --rc genhtml_legend=1 00:24:57.257 --rc geninfo_all_blocks=1 00:24:57.257 --rc geninfo_unexecuted_blocks=1 00:24:57.257 00:24:57.257 ' 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.257 --rc genhtml_branch_coverage=1 00:24:57.257 --rc genhtml_function_coverage=1 00:24:57.257 --rc genhtml_legend=1 00:24:57.257 --rc geninfo_all_blocks=1 00:24:57.257 --rc geninfo_unexecuted_blocks=1 00:24:57.257 00:24:57.257 ' 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.257 --rc genhtml_branch_coverage=1 00:24:57.257 --rc genhtml_function_coverage=1 00:24:57.257 --rc genhtml_legend=1 00:24:57.257 --rc geninfo_all_blocks=1 00:24:57.257 --rc geninfo_unexecuted_blocks=1 00:24:57.257 00:24:57.257 ' 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.257 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.527 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.528 21:17:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.528 21:17:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.667 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:05.668 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:05.668 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.668 21:18:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:05.668 Found net devices under 0000:31:00.0: cvl_0_0 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:05.668 Found net devices under 0000:31:00.1: cvl_0_1 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:25:05.668 00:25:05.668 --- 10.0.0.2 ping statistics --- 00:25:05.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.668 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:25:05.668 00:25:05.668 --- 10.0.0.1 ping statistics --- 00:25:05.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.668 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.668 21:18:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.668 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2184522 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2184522 00:25:05.669 21:18:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2184522 ']' 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.669 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:05.669 [2024-12-05 21:18:07.098382] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:25:05.669 [2024-12-05 21:18:07.098449] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.929 [2024-12-05 21:18:07.208994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:05.929 [2024-12-05 21:18:07.261545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.929 [2024-12-05 21:18:07.261601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:05.929 [2024-12-05 21:18:07.261610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.929 [2024-12-05 21:18:07.261618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.929 [2024-12-05 21:18:07.261624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.929 [2024-12-05 21:18:07.263730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.929 [2024-12-05 21:18:07.263911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.929 [2024-12-05 21:18:07.263911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:06.501 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.501 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:06.501 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.501 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.501 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 [2024-12-05 21:18:07.956798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 Malloc0 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 [2024-12-05 
21:18:08.034508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 [2024-12-05 21:18:08.046433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 Malloc1 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.763 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2184806 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2184806 /var/tmp/bdevperf.sock 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2184806 ']' 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.764 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.707 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.707 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:07.707 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:07.707 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.707 21:18:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.707 NVMe0n1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.707 1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:07.707 21:18:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.707 request: 00:25:07.707 { 00:25:07.707 "name": "NVMe0", 00:25:07.707 "trtype": "tcp", 00:25:07.707 "traddr": "10.0.0.2", 00:25:07.707 "adrfam": "ipv4", 00:25:07.707 "trsvcid": "4420", 00:25:07.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.707 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:07.707 "hostaddr": "10.0.0.1", 00:25:07.707 "prchk_reftag": false, 00:25:07.707 "prchk_guard": false, 00:25:07.707 "hdgst": false, 00:25:07.707 "ddgst": false, 00:25:07.707 "allow_unrecognized_csi": false, 00:25:07.707 "method": "bdev_nvme_attach_controller", 00:25:07.707 "req_id": 1 00:25:07.707 } 00:25:07.707 Got JSON-RPC error response 00:25:07.707 response: 00:25:07.707 { 00:25:07.707 "code": -114, 00:25:07.707 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:07.707 } 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:07.707 21:18:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.707 request: 00:25:07.707 { 00:25:07.707 "name": "NVMe0", 00:25:07.707 "trtype": "tcp", 00:25:07.707 "traddr": "10.0.0.2", 00:25:07.707 "adrfam": "ipv4", 00:25:07.707 "trsvcid": "4420", 00:25:07.707 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:07.707 "hostaddr": "10.0.0.1", 00:25:07.707 "prchk_reftag": false, 00:25:07.707 "prchk_guard": false, 00:25:07.707 "hdgst": false, 00:25:07.707 "ddgst": false, 00:25:07.707 "allow_unrecognized_csi": false, 00:25:07.707 "method": "bdev_nvme_attach_controller", 00:25:07.707 "req_id": 1 00:25:07.707 } 00:25:07.707 Got JSON-RPC error response 00:25:07.707 response: 00:25:07.707 { 00:25:07.707 "code": -114, 00:25:07.707 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:07.707 } 00:25:07.707 21:18:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.707 request: 00:25:07.707 { 00:25:07.707 "name": "NVMe0", 00:25:07.707 "trtype": "tcp", 00:25:07.707 "traddr": "10.0.0.2", 00:25:07.707 "adrfam": "ipv4", 00:25:07.707 "trsvcid": "4420", 00:25:07.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.707 "hostaddr": "10.0.0.1", 00:25:07.707 "prchk_reftag": false, 00:25:07.707 "prchk_guard": false, 00:25:07.707 "hdgst": false, 00:25:07.707 "ddgst": false, 00:25:07.707 "multipath": "disable", 00:25:07.707 "allow_unrecognized_csi": false, 00:25:07.707 "method": "bdev_nvme_attach_controller", 00:25:07.707 "req_id": 1 00:25:07.707 } 00:25:07.707 Got JSON-RPC error response 00:25:07.707 response: 00:25:07.707 { 00:25:07.707 "code": -114, 00:25:07.707 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:25:07.707 } 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:07.707 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.708 request: 00:25:07.708 { 00:25:07.708 "name": "NVMe0", 00:25:07.708 "trtype": "tcp", 00:25:07.708 "traddr": "10.0.0.2", 00:25:07.708 "adrfam": "ipv4", 00:25:07.708 "trsvcid": "4420", 00:25:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.708 "hostaddr": "10.0.0.1", 00:25:07.708 "prchk_reftag": false, 00:25:07.708 "prchk_guard": false, 00:25:07.708 "hdgst": false, 00:25:07.708 "ddgst": false, 00:25:07.708 "multipath": "failover", 00:25:07.708 "allow_unrecognized_csi": false, 00:25:07.708 "method": "bdev_nvme_attach_controller", 00:25:07.708 "req_id": 1 00:25:07.708 } 00:25:07.708 Got JSON-RPC error response 00:25:07.708 response: 00:25:07.708 { 00:25:07.708 "code": -114, 00:25:07.708 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:07.708 } 00:25:07.708 21:18:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.708 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.968 NVMe0n1 00:25:07.968 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.968 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.968 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.968 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:07.968 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.969 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:07.969 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.969 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:08.229 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:08.229 21:18:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:09.170 { 00:25:09.170 "results": [ 00:25:09.170 { 00:25:09.170 "job": "NVMe0n1", 00:25:09.170 "core_mask": "0x1", 00:25:09.170 "workload": "write", 00:25:09.170 "status": "finished", 00:25:09.170 "queue_depth": 128, 00:25:09.170 "io_size": 4096, 00:25:09.170 "runtime": 1.003459, 00:25:09.170 "iops": 28800.379487353246, 00:25:09.170 "mibps": 112.50148237247362, 00:25:09.170 "io_failed": 0, 00:25:09.170 "io_timeout": 0, 00:25:09.170 "avg_latency_us": 4436.053510495963, 00:25:09.170 "min_latency_us": 2061.653333333333, 00:25:09.170 "max_latency_us": 8410.453333333333 00:25:09.170 } 00:25:09.170 ], 00:25:09.170 "core_count": 1 00:25:09.170 } 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2184806 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2184806 ']' 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2184806 00:25:09.170 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184806 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184806' 00:25:09.431 killing process with pid 2184806 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2184806 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2184806 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:25:09.431 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:25:09.431 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:09.431 [2024-12-05 21:18:08.168309] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:25:09.431 [2024-12-05 21:18:08.168366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184806 ] 00:25:09.431 [2024-12-05 21:18:08.246391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.431 [2024-12-05 21:18:08.282569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.431 [2024-12-05 21:18:09.445208] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 59992237-60ed-4358-b5e7-67b8c567701e already exists 00:25:09.431 [2024-12-05 21:18:09.445238] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:59992237-60ed-4358-b5e7-67b8c567701e alias for bdev NVMe1n1 00:25:09.431 [2024-12-05 21:18:09.445248] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:09.431 Running I/O for 1 seconds... 00:25:09.431 28772.00 IOPS, 112.39 MiB/s 00:25:09.431 Latency(us) 00:25:09.431 [2024-12-05T20:18:10.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.431 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:09.431 NVMe0n1 : 1.00 28800.38 112.50 0.00 0.00 4436.05 2061.65 8410.45 00:25:09.431 [2024-12-05T20:18:10.868Z] =================================================================================================================== 00:25:09.431 [2024-12-05T20:18:10.868Z] Total : 28800.38 112.50 0.00 0.00 4436.05 2061.65 8410.45 00:25:09.431 Received shutdown signal, test time was about 1.000000 seconds 00:25:09.431 00:25:09.431 Latency(us) 00:25:09.431 [2024-12-05T20:18:10.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.432 [2024-12-05T20:18:10.869Z] =================================================================================================================== 00:25:09.432 [2024-12-05T20:18:10.869Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:25:09.432 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.432 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.432 rmmod nvme_tcp 00:25:09.432 rmmod nvme_fabrics 00:25:09.693 rmmod nvme_keyring 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2184522 ']' 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2184522 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2184522 ']' 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2184522 
00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184522 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184522' 00:25:09.693 killing process with pid 2184522 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2184522 00:25:09.693 21:18:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2184522 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.693 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.955 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.955 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:25:09.955 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.955 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.955 21:18:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.868 00:25:11.868 real 0m14.739s 00:25:11.868 user 0m17.023s 00:25:11.868 sys 0m7.021s 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 ************************************ 00:25:11.868 END TEST nvmf_multicontroller 00:25:11.868 ************************************ 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.868 ************************************ 00:25:11.868 START TEST nvmf_aer 00:25:11.868 ************************************ 00:25:11.868 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:12.130 * Looking for test storage... 
00:25:12.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.130 --rc genhtml_branch_coverage=1 00:25:12.130 --rc genhtml_function_coverage=1 00:25:12.130 --rc genhtml_legend=1 00:25:12.130 --rc geninfo_all_blocks=1 00:25:12.130 --rc geninfo_unexecuted_blocks=1 00:25:12.130 00:25:12.130 ' 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.130 --rc 
genhtml_branch_coverage=1 00:25:12.130 --rc genhtml_function_coverage=1 00:25:12.130 --rc genhtml_legend=1 00:25:12.130 --rc geninfo_all_blocks=1 00:25:12.130 --rc geninfo_unexecuted_blocks=1 00:25:12.130 00:25:12.130 ' 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.130 --rc genhtml_branch_coverage=1 00:25:12.130 --rc genhtml_function_coverage=1 00:25:12.130 --rc genhtml_legend=1 00:25:12.130 --rc geninfo_all_blocks=1 00:25:12.130 --rc geninfo_unexecuted_blocks=1 00:25:12.130 00:25:12.130 ' 00:25:12.130 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:12.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.130 --rc genhtml_branch_coverage=1 00:25:12.130 --rc genhtml_function_coverage=1 00:25:12.130 --rc genhtml_legend=1 00:25:12.130 --rc geninfo_all_blocks=1 00:25:12.130 --rc geninfo_unexecuted_blocks=1 00:25:12.131 00:25:12.131 ' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.131 21:18:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.131 21:18:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:20.269 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.269 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:20.270 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.270 21:18:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:20.270 Found net devices under 0000:31:00.0: cvl_0_0 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:20.270 Found net devices under 0000:31:00.1: cvl_0_1 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:20.270 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:20.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:20.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:25:20.531 00:25:20.531 --- 10.0.0.2 ping statistics --- 00:25:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.531 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:25:20.531 00:25:20.531 --- 10.0.0.1 ping statistics --- 00:25:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.531 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2190613 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2190613 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2190613 ']' 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.531 21:18:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:20.531 [2024-12-05 21:18:21.854840] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:25:20.531 [2024-12-05 21:18:21.854913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.531 [2024-12-05 21:18:21.946122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.791 [2024-12-05 21:18:21.988186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
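The `waitforlisten` helper traced above retries until the target process accepts connections on `/var/tmp/spdk.sock` (up to `max_retries=100`). A minimal Python sketch of that poll-until-listening pattern — function name and limits mirror the trace, but this is an illustration, not SPDK's implementation:

```python
import socket
import time

def wait_for_listen(rpc_addr, max_retries=100, interval=0.1):
    """Poll a UNIX domain socket path until something accepts connections.

    Mirrors the waitforlisten loop in the trace: try to connect, sleep,
    retry up to max_retries times. Returns True once a connect succeeds.
    """
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(rpc_addr)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```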
00:25:20.791 [2024-12-05 21:18:21.988223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.791 [2024-12-05 21:18:21.988231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.791 [2024-12-05 21:18:21.988238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.791 [2024-12-05 21:18:21.988244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.791 [2024-12-05 21:18:21.990135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.791 [2024-12-05 21:18:21.990353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.791 [2024-12-05 21:18:21.990510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.791 [2024-12-05 21:18:21.990510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 [2024-12-05 21:18:22.714821] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 Malloc0 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.362 [2024-12-05 21:18:22.789263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
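The `rpc_cmd` calls traced here (`nvmf_create_transport`, `bdev_malloc_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`) are thin wrappers that send JSON-RPC requests to the target over the UNIX socket. A sketch of what those payloads look like — the JSON-RPC 2.0 framing is standard, but the parameter names below are assumptions mapped from the CLI flags in the trace (`-a`, `-s`, `-m`, ...), not verified against the RPC schema:

```python
import itertools
import json

_ids = itertools.count(1)

def rpc_request(method, **params):
    """Build one JSON-RPC 2.0 request, as rpc_cmd sends to /var/tmp/spdk.sock."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

# The sequence from the trace; parameter names are illustrative guesses
# derived from the CLI flags, and the malloc size assumes 64 MiB of
# 512-byte blocks (bdev_malloc_create 64 512).
requests = [
    rpc_request("nvmf_create_transport", trtype="tcp", io_unit_size=8192),
    rpc_request("bdev_malloc_create", num_blocks=131072, block_size=512,
                name="Malloc0"),
    rpc_request("nvmf_create_subsystem",
                nqn="nqn.2016-06.io.spdk:cnode1",
                allow_any_host=True,
                serial_number="SPDK00000000000001",
                max_namespaces=2),
    rpc_request("nvmf_subsystem_add_ns",
                nqn="nqn.2016-06.io.spdk:cnode1", bdev_name="Malloc0"),
    rpc_request("nvmf_subsystem_add_listener",
                nqn="nqn.2016-06.io.spdk:cnode1",
                trtype="tcp", traddr="10.0.0.2", trsvcid="4420"),
]
```

The `serial_number` and `max_namespaces` values reappear in the `nvmf_get_subsystems` output below, which is how the test verifies the subsystem was created as requested.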
00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.362 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.623 [ 00:25:21.623 { 00:25:21.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.623 "subtype": "Discovery", 00:25:21.623 "listen_addresses": [], 00:25:21.623 "allow_any_host": true, 00:25:21.623 "hosts": [] 00:25:21.623 }, 00:25:21.623 { 00:25:21.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.623 "subtype": "NVMe", 00:25:21.623 "listen_addresses": [ 00:25:21.623 { 00:25:21.623 "trtype": "TCP", 00:25:21.623 "adrfam": "IPv4", 00:25:21.623 "traddr": "10.0.0.2", 00:25:21.623 "trsvcid": "4420" 00:25:21.623 } 00:25:21.623 ], 00:25:21.623 "allow_any_host": true, 00:25:21.623 "hosts": [], 00:25:21.623 "serial_number": "SPDK00000000000001", 00:25:21.623 "model_number": "SPDK bdev Controller", 00:25:21.623 "max_namespaces": 2, 00:25:21.623 "min_cntlid": 1, 00:25:21.623 "max_cntlid": 65519, 00:25:21.623 "namespaces": [ 00:25:21.623 { 00:25:21.623 "nsid": 1, 00:25:21.623 "bdev_name": "Malloc0", 00:25:21.623 "name": "Malloc0", 00:25:21.623 "nguid": "CA3F26DB25AD4F7C948499B056178A70", 00:25:21.623 "uuid": "ca3f26db-25ad-4f7c-9484-99b056178a70" 00:25:21.623 } 00:25:21.623 ] 00:25:21.623 } 00:25:21.623 ] 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2190690 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:25:21.623 21:18:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:21.623 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.623 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:25:21.623 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:25:21.623 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.885 Malloc1 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.885 Asynchronous Event Request test 00:25:21.885 Attaching to 10.0.0.2 00:25:21.885 Attached to 10.0.0.2 00:25:21.885 Registering asynchronous event callbacks... 00:25:21.885 Starting namespace attribute notice tests for all controllers... 00:25:21.885 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:21.885 aer_cb - Changed Namespace 00:25:21.885 Cleaning up... 
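The `waitforfile` loop traced above polls for `/tmp/aer_touch_file` in 0.1 s steps, giving up after 200 iterations; the AER tool touches the file once its callbacks are registered. A minimal Python sketch of the same poll-with-timeout pattern (limits mirror the trace; illustrative, not SPDK's implementation):

```python
import os
import time

def wait_for_file(path, max_tries=200, interval=0.1):
    """Poll until `path` exists; True on success, False on timeout.

    Mirrors the shell loop in the trace: check for the file, sleep 0.1 s,
    retry up to 200 times, then do one final check.
    """
    for _ in range(max_tries):
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return os.path.exists(path)
```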
00:25:21.885 [ 00:25:21.885 { 00:25:21.885 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:21.885 "subtype": "Discovery", 00:25:21.885 "listen_addresses": [], 00:25:21.885 "allow_any_host": true, 00:25:21.885 "hosts": [] 00:25:21.885 }, 00:25:21.885 { 00:25:21.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.885 "subtype": "NVMe", 00:25:21.885 "listen_addresses": [ 00:25:21.885 { 00:25:21.885 "trtype": "TCP", 00:25:21.885 "adrfam": "IPv4", 00:25:21.885 "traddr": "10.0.0.2", 00:25:21.885 "trsvcid": "4420" 00:25:21.885 } 00:25:21.885 ], 00:25:21.885 "allow_any_host": true, 00:25:21.885 "hosts": [], 00:25:21.885 "serial_number": "SPDK00000000000001", 00:25:21.885 "model_number": "SPDK bdev Controller", 00:25:21.885 "max_namespaces": 2, 00:25:21.885 "min_cntlid": 1, 00:25:21.885 "max_cntlid": 65519, 00:25:21.885 "namespaces": [ 00:25:21.885 { 00:25:21.885 "nsid": 1, 00:25:21.885 "bdev_name": "Malloc0", 00:25:21.885 "name": "Malloc0", 00:25:21.885 "nguid": "CA3F26DB25AD4F7C948499B056178A70", 00:25:21.885 "uuid": "ca3f26db-25ad-4f7c-9484-99b056178a70" 00:25:21.885 }, 00:25:21.885 { 00:25:21.885 "nsid": 2, 00:25:21.885 "bdev_name": "Malloc1", 00:25:21.885 "name": "Malloc1", 00:25:21.885 "nguid": "A9B9859B68DC4410B13A8C3C3319BBCC", 00:25:21.885 "uuid": "a9b9859b-68dc-4410-b13a-8c3c3319bbcc" 00:25:21.885 } 00:25:21.885 ] 00:25:21.885 } 00:25:21.885 ] 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2190690 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.885 21:18:23 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.885 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.885 rmmod nvme_tcp 00:25:21.885 rmmod nvme_fabrics 00:25:21.885 rmmod nvme_keyring 00:25:22.147 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.147 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:22.147 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:22.147 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2190613 ']' 00:25:22.147 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2190613 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2190613 ']' 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2190613 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190613 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190613' 00:25:22.148 killing process with pid 2190613 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2190613 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2190613 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.148 21:18:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.690 00:25:24.690 real 0m12.330s 00:25:24.690 user 0m8.656s 00:25:24.690 sys 0m6.655s 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:24.690 ************************************ 00:25:24.690 END TEST nvmf_aer 00:25:24.690 ************************************ 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.690 ************************************ 00:25:24.690 START TEST nvmf_async_init 00:25:24.690 ************************************ 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:24.690 * Looking for test storage... 
00:25:24.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.690 21:18:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:24.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.690 --rc genhtml_branch_coverage=1 00:25:24.690 --rc genhtml_function_coverage=1 00:25:24.690 --rc genhtml_legend=1 00:25:24.690 --rc geninfo_all_blocks=1 00:25:24.690 --rc geninfo_unexecuted_blocks=1 00:25:24.690 
00:25:24.690 ' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:24.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.690 --rc genhtml_branch_coverage=1 00:25:24.690 --rc genhtml_function_coverage=1 00:25:24.690 --rc genhtml_legend=1 00:25:24.690 --rc geninfo_all_blocks=1 00:25:24.690 --rc geninfo_unexecuted_blocks=1 00:25:24.690 00:25:24.690 ' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:24.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.690 --rc genhtml_branch_coverage=1 00:25:24.690 --rc genhtml_function_coverage=1 00:25:24.690 --rc genhtml_legend=1 00:25:24.690 --rc geninfo_all_blocks=1 00:25:24.690 --rc geninfo_unexecuted_blocks=1 00:25:24.690 00:25:24.690 ' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:24.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.690 --rc genhtml_branch_coverage=1 00:25:24.690 --rc genhtml_function_coverage=1 00:25:24.690 --rc genhtml_legend=1 00:25:24.690 --rc geninfo_all_blocks=1 00:25:24.690 --rc geninfo_unexecuted_blocks=1 00:25:24.690 00:25:24.690 ' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
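The `cmp_versions` walk traced earlier (`lt 1.15 2`) splits each version on `.`, `-`, or `:` (`IFS=.-:`) and compares the fields numerically, left to right. A minimal Python sketch of that comparison, with semantics assumed from the trace (numeric fields only, missing trailing fields treated as 0):

```python
import re

def version_lt(v1, v2):
    """True if v1 sorts before v2, comparing dotted versions field by field.

    Mirrors cmp_versions in the trace: split on . - :, compare numerically;
    shorter versions are padded with zeros, so "1.15" < "2" as in `lt 1.15 2`.
    """
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False
```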
00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.690 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0b6da3c8b3d34a359478d631dff27d64 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:24.691 21:18:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:32.820 21:18:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:32.820 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:32.820 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:32.820 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:32.821 Found net devices under 0000:31:00.0: cvl_0_0 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:32.821 Found net devices under 0000:31:00.1: cvl_0_1 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:32.821 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:33.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:25:33.082 00:25:33.082 --- 10.0.0.2 ping statistics --- 00:25:33.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.082 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:33.082 00:25:33.082 --- 10.0.0.1 ping statistics --- 00:25:33.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.082 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2195653 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2195653 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2195653 ']' 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.082 21:18:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.082 [2024-12-05 21:18:34.477499] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:25:33.082 [2024-12-05 21:18:34.477567] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.342 [2024-12-05 21:18:34.568003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.342 [2024-12-05 21:18:34.607878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.342 [2024-12-05 21:18:34.607914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.342 [2024-12-05 21:18:34.607922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.342 [2024-12-05 21:18:34.607928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.342 [2024-12-05 21:18:34.607934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.342 [2024-12-05 21:18:34.608558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.911 [2024-12-05 21:18:35.322913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.911 null0 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.911 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0b6da3c8b3d34a359478d631dff27d64 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.171 [2024-12-05 21:18:35.363150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.171 nvme0n1 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.171 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.171 [ 00:25:34.171 { 00:25:34.171 "name": "nvme0n1", 00:25:34.171 "aliases": [ 00:25:34.171 "0b6da3c8-b3d3-4a35-9478-d631dff27d64" 00:25:34.171 ], 00:25:34.171 "product_name": "NVMe disk", 00:25:34.171 "block_size": 512, 00:25:34.171 "num_blocks": 2097152, 00:25:34.171 "uuid": "0b6da3c8-b3d3-4a35-9478-d631dff27d64", 00:25:34.171 "numa_id": 0, 00:25:34.171 "assigned_rate_limits": { 00:25:34.171 "rw_ios_per_sec": 0, 00:25:34.171 "rw_mbytes_per_sec": 0, 00:25:34.171 "r_mbytes_per_sec": 0, 00:25:34.171 "w_mbytes_per_sec": 0 00:25:34.171 }, 00:25:34.432 "claimed": false, 00:25:34.432 "zoned": false, 00:25:34.432 "supported_io_types": { 00:25:34.432 "read": true, 00:25:34.432 "write": true, 00:25:34.432 "unmap": false, 00:25:34.432 "flush": true, 00:25:34.432 "reset": true, 00:25:34.432 "nvme_admin": true, 00:25:34.432 "nvme_io": true, 00:25:34.432 "nvme_io_md": false, 00:25:34.432 "write_zeroes": true, 00:25:34.432 "zcopy": false, 00:25:34.432 "get_zone_info": false, 00:25:34.432 "zone_management": false, 00:25:34.432 "zone_append": false, 00:25:34.432 "compare": true, 00:25:34.432 "compare_and_write": true, 00:25:34.432 "abort": true, 00:25:34.432 "seek_hole": false, 00:25:34.432 "seek_data": false, 00:25:34.432 "copy": true, 00:25:34.432 
"nvme_iov_md": false 00:25:34.432 }, 00:25:34.432 "memory_domains": [ 00:25:34.432 { 00:25:34.432 "dma_device_id": "system", 00:25:34.432 "dma_device_type": 1 00:25:34.432 } 00:25:34.432 ], 00:25:34.432 "driver_specific": { 00:25:34.432 "nvme": [ 00:25:34.432 { 00:25:34.432 "trid": { 00:25:34.433 "trtype": "TCP", 00:25:34.433 "adrfam": "IPv4", 00:25:34.433 "traddr": "10.0.0.2", 00:25:34.433 "trsvcid": "4420", 00:25:34.433 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:34.433 }, 00:25:34.433 "ctrlr_data": { 00:25:34.433 "cntlid": 1, 00:25:34.433 "vendor_id": "0x8086", 00:25:34.433 "model_number": "SPDK bdev Controller", 00:25:34.433 "serial_number": "00000000000000000000", 00:25:34.433 "firmware_revision": "25.01", 00:25:34.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:34.433 "oacs": { 00:25:34.433 "security": 0, 00:25:34.433 "format": 0, 00:25:34.433 "firmware": 0, 00:25:34.433 "ns_manage": 0 00:25:34.433 }, 00:25:34.433 "multi_ctrlr": true, 00:25:34.433 "ana_reporting": false 00:25:34.433 }, 00:25:34.433 "vs": { 00:25:34.433 "nvme_version": "1.3" 00:25:34.433 }, 00:25:34.433 "ns_data": { 00:25:34.433 "id": 1, 00:25:34.433 "can_share": true 00:25:34.433 } 00:25:34.433 } 00:25:34.433 ], 00:25:34.433 "mp_policy": "active_passive" 00:25:34.433 } 00:25:34.433 } 00:25:34.433 ] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 [2024-12-05 21:18:35.619645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:34.433 [2024-12-05 21:18:35.619707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1860c40 (9): Bad file descriptor 00:25:34.433 [2024-12-05 21:18:35.751959] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 [ 00:25:34.433 { 00:25:34.433 "name": "nvme0n1", 00:25:34.433 "aliases": [ 00:25:34.433 "0b6da3c8-b3d3-4a35-9478-d631dff27d64" 00:25:34.433 ], 00:25:34.433 "product_name": "NVMe disk", 00:25:34.433 "block_size": 512, 00:25:34.433 "num_blocks": 2097152, 00:25:34.433 "uuid": "0b6da3c8-b3d3-4a35-9478-d631dff27d64", 00:25:34.433 "numa_id": 0, 00:25:34.433 "assigned_rate_limits": { 00:25:34.433 "rw_ios_per_sec": 0, 00:25:34.433 "rw_mbytes_per_sec": 0, 00:25:34.433 "r_mbytes_per_sec": 0, 00:25:34.433 "w_mbytes_per_sec": 0 00:25:34.433 }, 00:25:34.433 "claimed": false, 00:25:34.433 "zoned": false, 00:25:34.433 "supported_io_types": { 00:25:34.433 "read": true, 00:25:34.433 "write": true, 00:25:34.433 "unmap": false, 00:25:34.433 "flush": true, 00:25:34.433 "reset": true, 00:25:34.433 "nvme_admin": true, 00:25:34.433 "nvme_io": true, 00:25:34.433 "nvme_io_md": false, 00:25:34.433 "write_zeroes": true, 00:25:34.433 "zcopy": false, 00:25:34.433 "get_zone_info": false, 00:25:34.433 "zone_management": false, 00:25:34.433 "zone_append": false, 00:25:34.433 "compare": true, 00:25:34.433 "compare_and_write": true, 00:25:34.433 "abort": true, 00:25:34.433 "seek_hole": false, 00:25:34.433 "seek_data": false, 00:25:34.433 "copy": true, 00:25:34.433 "nvme_iov_md": false 00:25:34.433 }, 00:25:34.433 "memory_domains": [ 
00:25:34.433 { 00:25:34.433 "dma_device_id": "system", 00:25:34.433 "dma_device_type": 1 00:25:34.433 } 00:25:34.433 ], 00:25:34.433 "driver_specific": { 00:25:34.433 "nvme": [ 00:25:34.433 { 00:25:34.433 "trid": { 00:25:34.433 "trtype": "TCP", 00:25:34.433 "adrfam": "IPv4", 00:25:34.433 "traddr": "10.0.0.2", 00:25:34.433 "trsvcid": "4420", 00:25:34.433 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:34.433 }, 00:25:34.433 "ctrlr_data": { 00:25:34.433 "cntlid": 2, 00:25:34.433 "vendor_id": "0x8086", 00:25:34.433 "model_number": "SPDK bdev Controller", 00:25:34.433 "serial_number": "00000000000000000000", 00:25:34.433 "firmware_revision": "25.01", 00:25:34.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:34.433 "oacs": { 00:25:34.433 "security": 0, 00:25:34.433 "format": 0, 00:25:34.433 "firmware": 0, 00:25:34.433 "ns_manage": 0 00:25:34.433 }, 00:25:34.433 "multi_ctrlr": true, 00:25:34.433 "ana_reporting": false 00:25:34.433 }, 00:25:34.433 "vs": { 00:25:34.433 "nvme_version": "1.3" 00:25:34.433 }, 00:25:34.433 "ns_data": { 00:25:34.433 "id": 1, 00:25:34.433 "can_share": true 00:25:34.433 } 00:25:34.433 } 00:25:34.433 ], 00:25:34.433 "mp_policy": "active_passive" 00:25:34.433 } 00:25:34.433 } 00:25:34.433 ] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.G0NzQZnd3x 
00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.G0NzQZnd3x 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.G0NzQZnd3x 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 [2024-12-05 21:18:35.820273] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:34.433 [2024-12-05 21:18:35.820382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.433 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.433 [2024-12-05 21:18:35.836330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:34.694 nvme0n1 00:25:34.694 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.694 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:34.694 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.694 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.694 [ 00:25:34.694 { 00:25:34.694 "name": "nvme0n1", 00:25:34.694 "aliases": [ 00:25:34.694 "0b6da3c8-b3d3-4a35-9478-d631dff27d64" 00:25:34.694 ], 00:25:34.694 "product_name": "NVMe disk", 00:25:34.695 "block_size": 512, 00:25:34.695 "num_blocks": 2097152, 00:25:34.695 "uuid": "0b6da3c8-b3d3-4a35-9478-d631dff27d64", 00:25:34.695 "numa_id": 0, 00:25:34.695 "assigned_rate_limits": { 00:25:34.695 "rw_ios_per_sec": 0, 00:25:34.695 
"rw_mbytes_per_sec": 0, 00:25:34.695 "r_mbytes_per_sec": 0, 00:25:34.695 "w_mbytes_per_sec": 0 00:25:34.695 }, 00:25:34.695 "claimed": false, 00:25:34.695 "zoned": false, 00:25:34.695 "supported_io_types": { 00:25:34.695 "read": true, 00:25:34.695 "write": true, 00:25:34.695 "unmap": false, 00:25:34.695 "flush": true, 00:25:34.695 "reset": true, 00:25:34.695 "nvme_admin": true, 00:25:34.695 "nvme_io": true, 00:25:34.695 "nvme_io_md": false, 00:25:34.695 "write_zeroes": true, 00:25:34.695 "zcopy": false, 00:25:34.695 "get_zone_info": false, 00:25:34.695 "zone_management": false, 00:25:34.695 "zone_append": false, 00:25:34.695 "compare": true, 00:25:34.695 "compare_and_write": true, 00:25:34.695 "abort": true, 00:25:34.695 "seek_hole": false, 00:25:34.695 "seek_data": false, 00:25:34.695 "copy": true, 00:25:34.695 "nvme_iov_md": false 00:25:34.695 }, 00:25:34.695 "memory_domains": [ 00:25:34.695 { 00:25:34.695 "dma_device_id": "system", 00:25:34.695 "dma_device_type": 1 00:25:34.695 } 00:25:34.695 ], 00:25:34.695 "driver_specific": { 00:25:34.695 "nvme": [ 00:25:34.695 { 00:25:34.695 "trid": { 00:25:34.695 "trtype": "TCP", 00:25:34.695 "adrfam": "IPv4", 00:25:34.695 "traddr": "10.0.0.2", 00:25:34.695 "trsvcid": "4421", 00:25:34.695 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:34.695 }, 00:25:34.695 "ctrlr_data": { 00:25:34.695 "cntlid": 3, 00:25:34.695 "vendor_id": "0x8086", 00:25:34.695 "model_number": "SPDK bdev Controller", 00:25:34.695 "serial_number": "00000000000000000000", 00:25:34.695 "firmware_revision": "25.01", 00:25:34.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:34.695 "oacs": { 00:25:34.695 "security": 0, 00:25:34.695 "format": 0, 00:25:34.695 "firmware": 0, 00:25:34.695 "ns_manage": 0 00:25:34.695 }, 00:25:34.695 "multi_ctrlr": true, 00:25:34.695 "ana_reporting": false 00:25:34.695 }, 00:25:34.695 "vs": { 00:25:34.695 "nvme_version": "1.3" 00:25:34.695 }, 00:25:34.695 "ns_data": { 00:25:34.695 "id": 1, 00:25:34.695 "can_share": true 00:25:34.695 } 
00:25:34.695 } 00:25:34.695 ], 00:25:34.695 "mp_policy": "active_passive" 00:25:34.695 } 00:25:34.695 } 00:25:34.695 ] 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.G0NzQZnd3x 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:34.695 21:18:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:34.695 rmmod nvme_tcp 00:25:34.695 rmmod nvme_fabrics 00:25:34.695 rmmod nvme_keyring 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:34.695 21:18:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2195653 ']' 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2195653 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2195653 ']' 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2195653 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195653 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195653' 00:25:34.695 killing process with pid 2195653 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2195653 00:25:34.695 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2195653 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:34.955 
21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.955 21:18:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.869 21:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:36.869 00:25:36.869 real 0m12.590s 00:25:36.869 user 0m4.386s 00:25:36.869 sys 0m6.737s 00:25:36.869 21:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.869 21:18:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:36.869 ************************************ 00:25:36.869 END TEST nvmf_async_init 00:25:36.869 ************************************ 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.131 ************************************ 00:25:37.131 START TEST dma 00:25:37.131 ************************************ 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:25:37.131 * Looking for test storage... 00:25:37.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.131 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.394 --rc genhtml_branch_coverage=1 00:25:37.394 --rc genhtml_function_coverage=1 00:25:37.394 --rc genhtml_legend=1 00:25:37.394 --rc geninfo_all_blocks=1 00:25:37.394 --rc geninfo_unexecuted_blocks=1 00:25:37.394 00:25:37.394 ' 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.394 --rc genhtml_branch_coverage=1 00:25:37.394 --rc genhtml_function_coverage=1 
00:25:37.394 --rc genhtml_legend=1 00:25:37.394 --rc geninfo_all_blocks=1 00:25:37.394 --rc geninfo_unexecuted_blocks=1 00:25:37.394 00:25:37.394 ' 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.394 --rc genhtml_branch_coverage=1 00:25:37.394 --rc genhtml_function_coverage=1 00:25:37.394 --rc genhtml_legend=1 00:25:37.394 --rc geninfo_all_blocks=1 00:25:37.394 --rc geninfo_unexecuted_blocks=1 00:25:37.394 00:25:37.394 ' 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:37.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.394 --rc genhtml_branch_coverage=1 00:25:37.394 --rc genhtml_function_coverage=1 00:25:37.394 --rc genhtml_legend=1 00:25:37.394 --rc geninfo_all_blocks=1 00:25:37.394 --rc geninfo_unexecuted_blocks=1 00:25:37.394 00:25:37.394 ' 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.394 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:37.395 
21:18:38 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:37.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:37.395 00:25:37.395 real 0m0.234s 00:25:37.395 user 0m0.134s 00:25:37.395 sys 0m0.114s 00:25:37.395 21:18:38 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:37.395 ************************************ 00:25:37.395 END TEST dma 00:25:37.395 ************************************ 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.395 ************************************ 00:25:37.395 START TEST nvmf_identify 00:25:37.395 ************************************ 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:37.395 * Looking for test storage... 
00:25:37.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:25:37.395 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:37.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.657 --rc genhtml_branch_coverage=1 00:25:37.657 --rc genhtml_function_coverage=1 00:25:37.657 --rc genhtml_legend=1 00:25:37.657 --rc geninfo_all_blocks=1 00:25:37.657 --rc geninfo_unexecuted_blocks=1 00:25:37.657 00:25:37.657 ' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:25:37.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.657 --rc genhtml_branch_coverage=1 00:25:37.657 --rc genhtml_function_coverage=1 00:25:37.657 --rc genhtml_legend=1 00:25:37.657 --rc geninfo_all_blocks=1 00:25:37.657 --rc geninfo_unexecuted_blocks=1 00:25:37.657 00:25:37.657 ' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:37.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.657 --rc genhtml_branch_coverage=1 00:25:37.657 --rc genhtml_function_coverage=1 00:25:37.657 --rc genhtml_legend=1 00:25:37.657 --rc geninfo_all_blocks=1 00:25:37.657 --rc geninfo_unexecuted_blocks=1 00:25:37.657 00:25:37.657 ' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:37.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.657 --rc genhtml_branch_coverage=1 00:25:37.657 --rc genhtml_function_coverage=1 00:25:37.657 --rc genhtml_legend=1 00:25:37.657 --rc geninfo_all_blocks=1 00:25:37.657 --rc geninfo_unexecuted_blocks=1 00:25:37.657 00:25:37.657 ' 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:37.657 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:37.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:37.658 21:18:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.805 21:18:47 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:45.805 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.805 
21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:45.805 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:45.805 Found net devices under 0000:31:00.0: cvl_0_0 00:25:45.805 21:18:47 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:45.805 Found net devices under 0000:31:00.1: cvl_0_1 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.805 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:46.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:25:46.068 00:25:46.068 --- 10.0.0.2 ping statistics --- 00:25:46.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.068 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:25:46.068 00:25:46.068 --- 10.0.0.1 ping statistics --- 00:25:46.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.068 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:46.068 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2200862 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2200862 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2200862 ']' 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.328 21:18:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 [2024-12-05 21:18:47.566220] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:25:46.328 [2024-12-05 21:18:47.566288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.328 [2024-12-05 21:18:47.657399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:46.328 [2024-12-05 21:18:47.700431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.328 [2024-12-05 21:18:47.700471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.328 [2024-12-05 21:18:47.700479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.328 [2024-12-05 21:18:47.700485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.328 [2024-12-05 21:18:47.700491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:46.328 [2024-12-05 21:18:47.702066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.328 [2024-12-05 21:18:47.702186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.328 [2024-12-05 21:18:47.702343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.328 [2024-12-05 21:18:47.702344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 [2024-12-05 21:18:48.378026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 Malloc0 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 [2024-12-05 21:18:48.486278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 21:18:48 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.272 [ 00:25:47.272 { 00:25:47.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:47.272 "subtype": "Discovery", 00:25:47.272 "listen_addresses": [ 00:25:47.272 { 00:25:47.272 "trtype": "TCP", 00:25:47.272 "adrfam": "IPv4", 00:25:47.272 "traddr": "10.0.0.2", 00:25:47.272 "trsvcid": "4420" 00:25:47.272 } 00:25:47.272 ], 00:25:47.272 "allow_any_host": true, 00:25:47.272 "hosts": [] 00:25:47.272 }, 00:25:47.272 { 00:25:47.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.272 "subtype": "NVMe", 00:25:47.272 "listen_addresses": [ 00:25:47.272 { 00:25:47.272 "trtype": "TCP", 00:25:47.272 "adrfam": "IPv4", 00:25:47.272 "traddr": "10.0.0.2", 00:25:47.272 "trsvcid": "4420" 00:25:47.272 } 00:25:47.272 ], 00:25:47.272 "allow_any_host": true, 00:25:47.272 "hosts": [], 00:25:47.272 "serial_number": "SPDK00000000000001", 00:25:47.272 "model_number": "SPDK bdev Controller", 00:25:47.272 "max_namespaces": 32, 00:25:47.272 "min_cntlid": 1, 00:25:47.272 "max_cntlid": 65519, 00:25:47.272 "namespaces": [ 00:25:47.272 { 00:25:47.272 "nsid": 1, 00:25:47.272 "bdev_name": "Malloc0", 00:25:47.272 "name": "Malloc0", 00:25:47.272 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:47.272 "eui64": "ABCDEF0123456789", 00:25:47.272 "uuid": "cbf48556-a6e0-438f-97a5-8e22729b361d" 00:25:47.272 } 00:25:47.272 ] 00:25:47.272 } 00:25:47.272 ] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.272 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:47.272 [2024-12-05 21:18:48.548734] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:25:47.272 [2024-12-05 21:18:48.548778] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201099 ] 00:25:47.272 [2024-12-05 21:18:48.604054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:47.272 [2024-12-05 21:18:48.604110] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:47.272 [2024-12-05 21:18:48.604116] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:47.272 [2024-12-05 21:18:48.604130] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:47.272 [2024-12-05 21:18:48.604138] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:47.272 [2024-12-05 21:18:48.604857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:47.272 [2024-12-05 21:18:48.604899] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c3b550 0 00:25:47.272 [2024-12-05 21:18:48.611877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:47.272 [2024-12-05 21:18:48.611890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:47.272 [2024-12-05 21:18:48.611895] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:47.272 [2024-12-05 21:18:48.611898] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:47.273 [2024-12-05 21:18:48.611932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.611938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.611942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.611955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:47.273 [2024-12-05 21:18:48.611972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.619872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.619881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.619885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.619890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.619900] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:47.273 [2024-12-05 21:18:48.619907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:47.273 [2024-12-05 21:18:48.619912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:47.273 [2024-12-05 21:18:48.619926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.619930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.619933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 
00:25:47.273 [2024-12-05 21:18:48.619941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.619955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.620169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.620176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.620179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.620192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:47.273 [2024-12-05 21:18:48.620200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:47.273 [2024-12-05 21:18:48.620207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.620221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.620232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.620476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.620482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:47.273 [2024-12-05 21:18:48.620486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.620495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:47.273 [2024-12-05 21:18:48.620503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:47.273 [2024-12-05 21:18:48.620509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.620524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.620533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.620777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.620784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.620787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.620796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:47.273 [2024-12-05 21:18:48.620805] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.620813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.620820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.620830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.621044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.621051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.621055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.621063] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:47.273 [2024-12-05 21:18:48.621073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:47.273 [2024-12-05 21:18:48.621081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:47.273 [2024-12-05 21:18:48.621189] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:47.273 [2024-12-05 21:18:48.621193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:25:47.273 [2024-12-05 21:18:48.621202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.621216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.621227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.621428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.621434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.621438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.621446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:47.273 [2024-12-05 21:18:48.621455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.621470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.621479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 
21:18:48.621696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.621702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.621706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.621714] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:47.273 [2024-12-05 21:18:48.621719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:47.273 [2024-12-05 21:18:48.621727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:47.273 [2024-12-05 21:18:48.621735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:47.273 [2024-12-05 21:18:48.621744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.621748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.621755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.273 [2024-12-05 21:18:48.621765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.621998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.273 [2024-12-05 21:18:48.622006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:25:47.273 [2024-12-05 21:18:48.622009] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622013] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3b550): datao=0, datal=4096, cccid=0 00:25:47.273 [2024-12-05 21:18:48.622018] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9d100) on tqpair(0x1c3b550): expected_datao=0, payload_size=4096 00:25:47.273 [2024-12-05 21:18:48.622023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.273 [2024-12-05 21:18:48.622233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.273 [2024-12-05 21:18:48.622236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.273 [2024-12-05 21:18:48.622248] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:47.273 [2024-12-05 21:18:48.622256] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:47.273 [2024-12-05 21:18:48.622261] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:47.273 [2024-12-05 21:18:48.622266] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:47.273 [2024-12-05 21:18:48.622270] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:47.273 [2024-12-05 21:18:48.622275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:47.273 [2024-12-05 21:18:48.622283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:47.273 [2024-12-05 21:18:48.622290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.273 [2024-12-05 21:18:48.622298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.273 [2024-12-05 21:18:48.622305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:47.273 [2024-12-05 21:18:48.622316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.273 [2024-12-05 21:18:48.622508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.274 [2024-12-05 21:18:48.622515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.274 [2024-12-05 21:18:48.622518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.274 [2024-12-05 21:18:48.622530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.622543] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.274 [2024-12-05 21:18:48.622550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.622565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.274 [2024-12-05 21:18:48.622571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.622584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.274 [2024-12-05 21:18:48.622590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.622603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.274 [2024-12-05 21:18:48.622608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:47.274 [2024-12-05 21:18:48.622618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:47.274 [2024-12-05 21:18:48.622625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.622635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.274 [2024-12-05 21:18:48.622647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d100, cid 0, qid 0 00:25:47.274 [2024-12-05 21:18:48.622652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d280, cid 1, qid 0 00:25:47.274 [2024-12-05 21:18:48.622657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d400, cid 2, qid 0 00:25:47.274 [2024-12-05 21:18:48.622662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d580, cid 3, qid 0 00:25:47.274 [2024-12-05 21:18:48.622667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d700, cid 4, qid 0 00:25:47.274 [2024-12-05 21:18:48.622904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.274 [2024-12-05 21:18:48.622911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.274 [2024-12-05 21:18:48.622915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d700) on tqpair=0x1c3b550 00:25:47.274 [2024-12-05 21:18:48.622924] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:47.274 [2024-12-05 21:18:48.622929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:25:47.274 [2024-12-05 21:18:48.622939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.622943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.622950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.274 [2024-12-05 21:18:48.622960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d700, cid 4, qid 0 00:25:47.274 [2024-12-05 21:18:48.623152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.274 [2024-12-05 21:18:48.623158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.274 [2024-12-05 21:18:48.623162] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.623167] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3b550): datao=0, datal=4096, cccid=4 00:25:47.274 [2024-12-05 21:18:48.623172] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9d700) on tqpair(0x1c3b550): expected_datao=0, payload_size=4096 00:25:47.274 [2024-12-05 21:18:48.623176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.623187] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.623191] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.666873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.274 [2024-12-05 21:18:48.666887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.274 [2024-12-05 21:18:48.666890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.666895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1c9d700) on tqpair=0x1c3b550 00:25:47.274 [2024-12-05 21:18:48.666908] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:47.274 [2024-12-05 21:18:48.666933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.666937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.666945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.274 [2024-12-05 21:18:48.666952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.666956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.666960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3b550) 00:25:47.274 [2024-12-05 21:18:48.666966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.274 [2024-12-05 21:18:48.666982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d700, cid 4, qid 0 00:25:47.274 [2024-12-05 21:18:48.666988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d880, cid 5, qid 0 00:25:47.274 [2024-12-05 21:18:48.667233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.274 [2024-12-05 21:18:48.667239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.274 [2024-12-05 21:18:48.667243] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.667247] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3b550): datao=0, datal=1024, cccid=4 00:25:47.274 [2024-12-05 21:18:48.667251] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9d700) on tqpair(0x1c3b550): expected_datao=0, payload_size=1024 00:25:47.274 [2024-12-05 21:18:48.667256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.667263] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.667266] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.667272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.274 [2024-12-05 21:18:48.667278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.274 [2024-12-05 21:18:48.667281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.274 [2024-12-05 21:18:48.667285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d880) on tqpair=0x1c3b550 00:25:47.540 [2024-12-05 21:18:48.708063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.540 [2024-12-05 21:18:48.708074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.540 [2024-12-05 21:18:48.708078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d700) on tqpair=0x1c3b550 00:25:47.540 [2024-12-05 21:18:48.708093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3b550) 00:25:47.540 [2024-12-05 21:18:48.708107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.540 [2024-12-05 21:18:48.708122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d700, cid 4, qid 0 00:25:47.540 [2024-12-05 21:18:48.708401] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.540 [2024-12-05 21:18:48.708408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.540 [2024-12-05 21:18:48.708411] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708415] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3b550): datao=0, datal=3072, cccid=4 00:25:47.540 [2024-12-05 21:18:48.708419] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9d700) on tqpair(0x1c3b550): expected_datao=0, payload_size=3072 00:25:47.540 [2024-12-05 21:18:48.708424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708431] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708434] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.540 [2024-12-05 21:18:48.708559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.540 [2024-12-05 21:18:48.708562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d700) on tqpair=0x1c3b550 00:25:47.540 [2024-12-05 21:18:48.708575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3b550) 00:25:47.540 [2024-12-05 21:18:48.708585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.540 [2024-12-05 21:18:48.708598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d700, cid 4, qid 0 00:25:47.540 [2024-12-05 
21:18:48.708795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.540 [2024-12-05 21:18:48.708802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.540 [2024-12-05 21:18:48.708805] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.540 [2024-12-05 21:18:48.708809] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3b550): datao=0, datal=8, cccid=4 00:25:47.540 [2024-12-05 21:18:48.708813] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9d700) on tqpair(0x1c3b550): expected_datao=0, payload_size=8 00:25:47.540 [2024-12-05 21:18:48.708818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.541 [2024-12-05 21:18:48.708824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.541 [2024-12-05 21:18:48.708828] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.541 [2024-12-05 21:18:48.749050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.541 [2024-12-05 21:18:48.749060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.541 [2024-12-05 21:18:48.749063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.541 [2024-12-05 21:18:48.749067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d700) on tqpair=0x1c3b550 00:25:47.541 ===================================================== 00:25:47.541 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:47.541 ===================================================== 00:25:47.541 Controller Capabilities/Features 00:25:47.541 ================================ 00:25:47.541 Vendor ID: 0000 00:25:47.541 Subsystem Vendor ID: 0000 00:25:47.541 Serial Number: .................... 00:25:47.541 Model Number: ........................................ 
00:25:47.541 Firmware Version: 25.01 00:25:47.541 Recommended Arb Burst: 0 00:25:47.541 IEEE OUI Identifier: 00 00 00 00:25:47.541 Multi-path I/O 00:25:47.541 May have multiple subsystem ports: No 00:25:47.541 May have multiple controllers: No 00:25:47.541 Associated with SR-IOV VF: No 00:25:47.541 Max Data Transfer Size: 131072 00:25:47.541 Max Number of Namespaces: 0 00:25:47.541 Max Number of I/O Queues: 1024 00:25:47.541 NVMe Specification Version (VS): 1.3 00:25:47.541 NVMe Specification Version (Identify): 1.3 00:25:47.541 Maximum Queue Entries: 128 00:25:47.541 Contiguous Queues Required: Yes 00:25:47.541 Arbitration Mechanisms Supported 00:25:47.541 Weighted Round Robin: Not Supported 00:25:47.541 Vendor Specific: Not Supported 00:25:47.541 Reset Timeout: 15000 ms 00:25:47.541 Doorbell Stride: 4 bytes 00:25:47.541 NVM Subsystem Reset: Not Supported 00:25:47.541 Command Sets Supported 00:25:47.541 NVM Command Set: Supported 00:25:47.541 Boot Partition: Not Supported 00:25:47.541 Memory Page Size Minimum: 4096 bytes 00:25:47.541 Memory Page Size Maximum: 4096 bytes 00:25:47.541 Persistent Memory Region: Not Supported 00:25:47.541 Optional Asynchronous Events Supported 00:25:47.541 Namespace Attribute Notices: Not Supported 00:25:47.541 Firmware Activation Notices: Not Supported 00:25:47.541 ANA Change Notices: Not Supported 00:25:47.541 PLE Aggregate Log Change Notices: Not Supported 00:25:47.541 LBA Status Info Alert Notices: Not Supported 00:25:47.541 EGE Aggregate Log Change Notices: Not Supported 00:25:47.541 Normal NVM Subsystem Shutdown event: Not Supported 00:25:47.541 Zone Descriptor Change Notices: Not Supported 00:25:47.541 Discovery Log Change Notices: Supported 00:25:47.541 Controller Attributes 00:25:47.541 128-bit Host Identifier: Not Supported 00:25:47.541 Non-Operational Permissive Mode: Not Supported 00:25:47.541 NVM Sets: Not Supported 00:25:47.541 Read Recovery Levels: Not Supported 00:25:47.541 Endurance Groups: Not Supported 00:25:47.541 
Predictable Latency Mode: Not Supported 00:25:47.541 Traffic Based Keep ALive: Not Supported 00:25:47.541 Namespace Granularity: Not Supported 00:25:47.541 SQ Associations: Not Supported 00:25:47.541 UUID List: Not Supported 00:25:47.541 Multi-Domain Subsystem: Not Supported 00:25:47.541 Fixed Capacity Management: Not Supported 00:25:47.541 Variable Capacity Management: Not Supported 00:25:47.541 Delete Endurance Group: Not Supported 00:25:47.541 Delete NVM Set: Not Supported 00:25:47.541 Extended LBA Formats Supported: Not Supported 00:25:47.541 Flexible Data Placement Supported: Not Supported 00:25:47.541 00:25:47.541 Controller Memory Buffer Support 00:25:47.541 ================================ 00:25:47.541 Supported: No 00:25:47.541 00:25:47.541 Persistent Memory Region Support 00:25:47.541 ================================ 00:25:47.541 Supported: No 00:25:47.541 00:25:47.541 Admin Command Set Attributes 00:25:47.541 ============================ 00:25:47.541 Security Send/Receive: Not Supported 00:25:47.541 Format NVM: Not Supported 00:25:47.541 Firmware Activate/Download: Not Supported 00:25:47.541 Namespace Management: Not Supported 00:25:47.541 Device Self-Test: Not Supported 00:25:47.541 Directives: Not Supported 00:25:47.541 NVMe-MI: Not Supported 00:25:47.541 Virtualization Management: Not Supported 00:25:47.541 Doorbell Buffer Config: Not Supported 00:25:47.541 Get LBA Status Capability: Not Supported 00:25:47.541 Command & Feature Lockdown Capability: Not Supported 00:25:47.541 Abort Command Limit: 1 00:25:47.541 Async Event Request Limit: 4 00:25:47.541 Number of Firmware Slots: N/A 00:25:47.541 Firmware Slot 1 Read-Only: N/A 00:25:47.541 Firmware Activation Without Reset: N/A 00:25:47.541 Multiple Update Detection Support: N/A 00:25:47.541 Firmware Update Granularity: No Information Provided 00:25:47.541 Per-Namespace SMART Log: No 00:25:47.541 Asymmetric Namespace Access Log Page: Not Supported 00:25:47.541 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:47.541 Command Effects Log Page: Not Supported 00:25:47.541 Get Log Page Extended Data: Supported 00:25:47.541 Telemetry Log Pages: Not Supported 00:25:47.541 Persistent Event Log Pages: Not Supported 00:25:47.541 Supported Log Pages Log Page: May Support 00:25:47.541 Commands Supported & Effects Log Page: Not Supported 00:25:47.541 Feature Identifiers & Effects Log Page:May Support 00:25:47.541 NVMe-MI Commands & Effects Log Page: May Support 00:25:47.541 Data Area 4 for Telemetry Log: Not Supported 00:25:47.541 Error Log Page Entries Supported: 128 00:25:47.541 Keep Alive: Not Supported 00:25:47.541 00:25:47.542 NVM Command Set Attributes 00:25:47.542 ========================== 00:25:47.542 Submission Queue Entry Size 00:25:47.542 Max: 1 00:25:47.542 Min: 1 00:25:47.542 Completion Queue Entry Size 00:25:47.542 Max: 1 00:25:47.542 Min: 1 00:25:47.542 Number of Namespaces: 0 00:25:47.542 Compare Command: Not Supported 00:25:47.542 Write Uncorrectable Command: Not Supported 00:25:47.542 Dataset Management Command: Not Supported 00:25:47.542 Write Zeroes Command: Not Supported 00:25:47.542 Set Features Save Field: Not Supported 00:25:47.542 Reservations: Not Supported 00:25:47.542 Timestamp: Not Supported 00:25:47.542 Copy: Not Supported 00:25:47.542 Volatile Write Cache: Not Present 00:25:47.542 Atomic Write Unit (Normal): 1 00:25:47.542 Atomic Write Unit (PFail): 1 00:25:47.542 Atomic Compare & Write Unit: 1 00:25:47.542 Fused Compare & Write: Supported 00:25:47.542 Scatter-Gather List 00:25:47.542 SGL Command Set: Supported 00:25:47.542 SGL Keyed: Supported 00:25:47.542 SGL Bit Bucket Descriptor: Not Supported 00:25:47.542 SGL Metadata Pointer: Not Supported 00:25:47.542 Oversized SGL: Not Supported 00:25:47.542 SGL Metadata Address: Not Supported 00:25:47.542 SGL Offset: Supported 00:25:47.542 Transport SGL Data Block: Not Supported 00:25:47.542 Replay Protected Memory Block: Not Supported 00:25:47.542 00:25:47.542 
Firmware Slot Information 00:25:47.542 ========================= 00:25:47.542 Active slot: 0 00:25:47.542 00:25:47.542 00:25:47.542 Error Log 00:25:47.542 ========= 00:25:47.542 00:25:47.542 Active Namespaces 00:25:47.542 ================= 00:25:47.542 Discovery Log Page 00:25:47.542 ================== 00:25:47.542 Generation Counter: 2 00:25:47.542 Number of Records: 2 00:25:47.542 Record Format: 0 00:25:47.542 00:25:47.542 Discovery Log Entry 0 00:25:47.542 ---------------------- 00:25:47.542 Transport Type: 3 (TCP) 00:25:47.542 Address Family: 1 (IPv4) 00:25:47.542 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:47.542 Entry Flags: 00:25:47.542 Duplicate Returned Information: 1 00:25:47.542 Explicit Persistent Connection Support for Discovery: 1 00:25:47.542 Transport Requirements: 00:25:47.542 Secure Channel: Not Required 00:25:47.542 Port ID: 0 (0x0000) 00:25:47.542 Controller ID: 65535 (0xffff) 00:25:47.542 Admin Max SQ Size: 128 00:25:47.542 Transport Service Identifier: 4420 00:25:47.542 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:47.542 Transport Address: 10.0.0.2 00:25:47.542 Discovery Log Entry 1 00:25:47.542 ---------------------- 00:25:47.542 Transport Type: 3 (TCP) 00:25:47.542 Address Family: 1 (IPv4) 00:25:47.542 Subsystem Type: 2 (NVM Subsystem) 00:25:47.542 Entry Flags: 00:25:47.542 Duplicate Returned Information: 0 00:25:47.542 Explicit Persistent Connection Support for Discovery: 0 00:25:47.542 Transport Requirements: 00:25:47.542 Secure Channel: Not Required 00:25:47.542 Port ID: 0 (0x0000) 00:25:47.542 Controller ID: 65535 (0xffff) 00:25:47.542 Admin Max SQ Size: 128 00:25:47.542 Transport Service Identifier: 4420 00:25:47.542 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:47.542 Transport Address: 10.0.0.2 [2024-12-05 21:18:48.749153] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:47.542 [2024-12-05 
21:18:48.749164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d100) on tqpair=0x1c3b550 00:25:47.542 [2024-12-05 21:18:48.749170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.542 [2024-12-05 21:18:48.749176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d280) on tqpair=0x1c3b550 00:25:47.542 [2024-12-05 21:18:48.749181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.542 [2024-12-05 21:18:48.749188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d400) on tqpair=0x1c3b550 00:25:47.542 [2024-12-05 21:18:48.749192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.542 [2024-12-05 21:18:48.749197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d580) on tqpair=0x1c3b550 00:25:47.542 [2024-12-05 21:18:48.749202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.542 [2024-12-05 21:18:48.749212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3b550) 00:25:47.542 [2024-12-05 21:18:48.749227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.542 [2024-12-05 21:18:48.749240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d580, cid 3, qid 0 00:25:47.542 [2024-12-05 21:18:48.749346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.542 [2024-12-05 
21:18:48.749353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.542 [2024-12-05 21:18:48.749356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d580) on tqpair=0x1c3b550 00:25:47.542 [2024-12-05 21:18:48.749367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3b550) 00:25:47.542 [2024-12-05 21:18:48.749381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.542 [2024-12-05 21:18:48.749394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d580, cid 3, qid 0 00:25:47.542 [2024-12-05 21:18:48.749584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.542 [2024-12-05 21:18:48.749590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.542 [2024-12-05 21:18:48.749594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d580) on tqpair=0x1c3b550 00:25:47.542 [2024-12-05 21:18:48.749602] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:47.542 [2024-12-05 21:18:48.749607] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:47.542 [2024-12-05 21:18:48.749616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.542 [2024-12-05 21:18:48.749620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.542 
[2024-12-05 21:18:48.749624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3b550) 00:25:47.542 [2024-12-05 21:18:48.749630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.543 [2024-12-05 21:18:48.749640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d580, cid 3, qid 0 00:25:47.543 [2024-12-05 21:18:48.749830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.543 [2024-12-05 21:18:48.749836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.543 [2024-12-05 21:18:48.749840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.749843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d580) on tqpair=0x1c3b550 00:25:47.543 [2024-12-05 21:18:48.749853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.749857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.753867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3b550) 00:25:47.543 [2024-12-05 21:18:48.753876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.543 [2024-12-05 21:18:48.753888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9d580, cid 3, qid 0 00:25:47.543 [2024-12-05 21:18:48.754095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.543 [2024-12-05 21:18:48.754101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.543 [2024-12-05 21:18:48.754105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.754109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9d580) on 
tqpair=0x1c3b550 00:25:47.543 [2024-12-05 21:18:48.754116] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:25:47.543 00:25:47.543 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:47.543 [2024-12-05 21:18:48.798473] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:25:47.543 [2024-12-05 21:18:48.798538] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201105 ] 00:25:47.543 [2024-12-05 21:18:48.851078] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:47.543 [2024-12-05 21:18:48.851128] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:47.543 [2024-12-05 21:18:48.851133] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:47.543 [2024-12-05 21:18:48.851146] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:47.543 [2024-12-05 21:18:48.851154] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:47.543 [2024-12-05 21:18:48.855063] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:47.543 [2024-12-05 21:18:48.855092] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x205f550 0 00:25:47.543 [2024-12-05 21:18:48.862875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:47.543 
[2024-12-05 21:18:48.862887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:47.543 [2024-12-05 21:18:48.862891] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:47.543 [2024-12-05 21:18:48.862895] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:47.543 [2024-12-05 21:18:48.862922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.862927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.862931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.543 [2024-12-05 21:18:48.862942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:47.543 [2024-12-05 21:18:48.862959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.543 [2024-12-05 21:18:48.870871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.543 [2024-12-05 21:18:48.870881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.543 [2024-12-05 21:18:48.870884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.870892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.543 [2024-12-05 21:18:48.870904] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:47.543 [2024-12-05 21:18:48.870910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:47.543 [2024-12-05 21:18:48.870916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:47.543 [2024-12-05 21:18:48.870927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.543 
[2024-12-05 21:18:48.870931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.870935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.543 [2024-12-05 21:18:48.870942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.543 [2024-12-05 21:18:48.870956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.543 [2024-12-05 21:18:48.871130] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.543 [2024-12-05 21:18:48.871136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.543 [2024-12-05 21:18:48.871140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.543 [2024-12-05 21:18:48.871149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:47.543 [2024-12-05 21:18:48.871156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:47.543 [2024-12-05 21:18:48.871163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.543 [2024-12-05 21:18:48.871178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.543 [2024-12-05 21:18:48.871188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 
00:25:47.543 [2024-12-05 21:18:48.871343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.543 [2024-12-05 21:18:48.871350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.543 [2024-12-05 21:18:48.871354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.543 [2024-12-05 21:18:48.871364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:47.543 [2024-12-05 21:18:48.871372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:47.543 [2024-12-05 21:18:48.871378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.543 [2024-12-05 21:18:48.871393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.543 [2024-12-05 21:18:48.871403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.543 [2024-12-05 21:18:48.871558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.543 [2024-12-05 21:18:48.871565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.543 [2024-12-05 21:18:48.871568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.543 [2024-12-05 21:18:48.871573] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.544 [2024-12-05 21:18:48.871581] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:47.544 [2024-12-05 21:18:48.871591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.871595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.871599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.544 [2024-12-05 21:18:48.871606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.544 [2024-12-05 21:18:48.871617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.544 [2024-12-05 21:18:48.871781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.544 [2024-12-05 21:18:48.871788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.544 [2024-12-05 21:18:48.871792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.871796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.544 [2024-12-05 21:18:48.871801] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:47.544 [2024-12-05 21:18:48.871807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:47.544 [2024-12-05 21:18:48.871815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:47.544 [2024-12-05 21:18:48.871924] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:47.544 [2024-12-05 
21:18:48.871930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:47.544 [2024-12-05 21:18:48.871939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.871943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.871948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.544 [2024-12-05 21:18:48.871955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.544 [2024-12-05 21:18:48.871967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.544 [2024-12-05 21:18:48.872127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.544 [2024-12-05 21:18:48.872133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.544 [2024-12-05 21:18:48.872136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.544 [2024-12-05 21:18:48.872145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:47.544 [2024-12-05 21:18:48.872155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.544 [2024-12-05 21:18:48.872169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:47.544 [2024-12-05 21:18:48.872179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.544 [2024-12-05 21:18:48.872391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.544 [2024-12-05 21:18:48.872397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.544 [2024-12-05 21:18:48.872400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.544 [2024-12-05 21:18:48.872412] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:47.544 [2024-12-05 21:18:48.872417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:47.544 [2024-12-05 21:18:48.872425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:47.544 [2024-12-05 21:18:48.872434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:47.544 [2024-12-05 21:18:48.872443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.544 [2024-12-05 21:18:48.872453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.544 [2024-12-05 21:18:48.872464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.544 [2024-12-05 21:18:48.872648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:25:47.544 [2024-12-05 21:18:48.872654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.544 [2024-12-05 21:18:48.872658] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872662] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=4096, cccid=0 00:25:47.544 [2024-12-05 21:18:48.872666] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1100) on tqpair(0x205f550): expected_datao=0, payload_size=4096 00:25:47.544 [2024-12-05 21:18:48.872671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872686] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.872690] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.913033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.544 [2024-12-05 21:18:48.913043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.544 [2024-12-05 21:18:48.913046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.913050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.544 [2024-12-05 21:18:48.913057] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:47.544 [2024-12-05 21:18:48.913065] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:47.544 [2024-12-05 21:18:48.913070] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:47.544 [2024-12-05 21:18:48.913074] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:47.544 [2024-12-05 21:18:48.913078] 
nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:47.544 [2024-12-05 21:18:48.913083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:47.544 [2024-12-05 21:18:48.913092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:47.544 [2024-12-05 21:18:48.913099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.913103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.913106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.544 [2024-12-05 21:18:48.913113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:47.544 [2024-12-05 21:18:48.913130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.544 [2024-12-05 21:18:48.913331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.544 [2024-12-05 21:18:48.913338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.544 [2024-12-05 21:18:48.913341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.544 [2024-12-05 21:18:48.913345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.545 [2024-12-05 21:18:48.913352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.913366] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.545 [2024-12-05 21:18:48.913372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.913385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.545 [2024-12-05 21:18:48.913391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.913405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.545 [2024-12-05 21:18:48.913411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.913424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.545 [2024-12-05 21:18:48.913429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.913456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.545 [2024-12-05 21:18:48.913468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1100, cid 0, qid 0 00:25:47.545 [2024-12-05 21:18:48.913473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1280, cid 1, qid 0 00:25:47.545 [2024-12-05 21:18:48.913478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1400, cid 2, qid 0 00:25:47.545 [2024-12-05 21:18:48.913483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.545 [2024-12-05 21:18:48.913488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.545 [2024-12-05 21:18:48.913672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.545 [2024-12-05 21:18:48.913679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.545 [2024-12-05 21:18:48.913682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.545 [2024-12-05 21:18:48.913692] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:47.545 [2024-12-05 21:18:48.913698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller 
iocs specific (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.913732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:47.545 [2024-12-05 21:18:48.913742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.545 [2024-12-05 21:18:48.913888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.545 [2024-12-05 21:18:48.913895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.545 [2024-12-05 21:18:48.913898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.913902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.545 [2024-12-05 21:18:48.913967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.913992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:25:47.545 [2024-12-05 21:18:48.913996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.914003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.545 [2024-12-05 21:18:48.914014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.545 [2024-12-05 21:18:48.914189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.545 [2024-12-05 21:18:48.914196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.545 [2024-12-05 21:18:48.914200] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914203] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=4096, cccid=4 00:25:47.545 [2024-12-05 21:18:48.914208] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1700) on tqpair(0x205f550): expected_datao=0, payload_size=4096 00:25:47.545 [2024-12-05 21:18:48.914212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914219] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914223] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.545 [2024-12-05 21:18:48.914381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.545 [2024-12-05 21:18:48.914384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.545 [2024-12-05 21:18:48.914396] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:47.545 [2024-12-05 21:18:48.914411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.914420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.914427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.914438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.545 [2024-12-05 21:18:48.914448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.545 [2024-12-05 21:18:48.914667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.545 [2024-12-05 21:18:48.914673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.545 [2024-12-05 21:18:48.914677] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914680] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=4096, cccid=4 00:25:47.545 [2024-12-05 21:18:48.914685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1700) on tqpair(0x205f550): expected_datao=0, payload_size=4096 00:25:47.545 [2024-12-05 21:18:48.914689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914696] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914699] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 
21:18:48.914814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.545 [2024-12-05 21:18:48.914820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.545 [2024-12-05 21:18:48.914824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.545 [2024-12-05 21:18:48.914839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.914848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:47.545 [2024-12-05 21:18:48.914855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.545 [2024-12-05 21:18:48.914859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.545 [2024-12-05 21:18:48.918872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.545 [2024-12-05 21:18:48.918885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.546 [2024-12-05 21:18:48.919084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.546 [2024-12-05 21:18:48.919091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.546 [2024-12-05 21:18:48.919094] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919098] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=4096, cccid=4 00:25:47.546 [2024-12-05 21:18:48.919102] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x20c1700) on tqpair(0x205f550): expected_datao=0, payload_size=4096 00:25:47.546 [2024-12-05 21:18:48.919107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919113] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919117] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.546 [2024-12-05 21:18:48.919281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.546 [2024-12-05 21:18:48.919287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.546 [2024-12-05 21:18:48.919299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919339] 
nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:47.546 [2024-12-05 21:18:48.919344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:47.546 [2024-12-05 21:18:48.919349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:47.546 [2024-12-05 21:18:48.919363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.919373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.919380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.919394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.546 [2024-12-05 21:18:48.919407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.546 [2024-12-05 21:18:48.919412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1880, cid 5, qid 0 00:25:47.546 [2024-12-05 21:18:48.919613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.546 [2024-12-05 21:18:48.919619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.546 [2024-12-05 21:18:48.919622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.546 [2024-12-05 21:18:48.919633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.546 [2024-12-05 21:18:48.919639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.546 [2024-12-05 21:18:48.919642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1880) on tqpair=0x205f550 00:25:47.546 [2024-12-05 21:18:48.919655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.919666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.919675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1880, cid 5, qid 0 00:25:47.546 [2024-12-05 21:18:48.919830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.546 [2024-12-05 21:18:48.919836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.546 [2024-12-05 21:18:48.919840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1880) on tqpair=0x205f550 00:25:47.546 [2024-12-05 21:18:48.919853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.919857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.919868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.919878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1880, cid 5, qid 0 00:25:47.546 [2024-12-05 21:18:48.920061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.546 [2024-12-05 21:18:48.920067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.546 [2024-12-05 21:18:48.920070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1880) on tqpair=0x205f550 00:25:47.546 [2024-12-05 21:18:48.920083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.920094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.920104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1880, cid 5, qid 0 00:25:47.546 [2024-12-05 21:18:48.920307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.546 [2024-12-05 21:18:48.920314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.546 [2024-12-05 21:18:48.920317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1880) on tqpair=0x205f550 00:25:47.546 [2024-12-05 21:18:48.920336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 
21:18:48.920347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.920355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.920365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.920372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.920382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.920390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.546 [2024-12-05 21:18:48.920393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x205f550) 00:25:47.546 [2024-12-05 21:18:48.920400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.546 [2024-12-05 21:18:48.920411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1880, cid 5, qid 0 00:25:47.546 [2024-12-05 21:18:48.920418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1700, cid 4, qid 0 00:25:47.546 [2024-12-05 21:18:48.920423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1a00, cid 6, qid 0 00:25:47.546 [2024-12-05 
21:18:48.920427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1b80, cid 7, qid 0 00:25:47.546 [2024-12-05 21:18:48.920661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.546 [2024-12-05 21:18:48.920668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.546 [2024-12-05 21:18:48.920671] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920675] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=8192, cccid=5 00:25:47.547 [2024-12-05 21:18:48.920679] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1880) on tqpair(0x205f550): expected_datao=0, payload_size=8192 00:25:47.547 [2024-12-05 21:18:48.920684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920770] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920774] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.547 [2024-12-05 21:18:48.920786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.547 [2024-12-05 21:18:48.920789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920793] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=512, cccid=4 00:25:47.547 [2024-12-05 21:18:48.920797] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1700) on tqpair(0x205f550): expected_datao=0, payload_size=512 00:25:47.547 [2024-12-05 21:18:48.920802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920808] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920812] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.547 [2024-12-05 21:18:48.920823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.547 [2024-12-05 21:18:48.920826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920830] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=512, cccid=6 00:25:47.547 [2024-12-05 21:18:48.920834] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1a00) on tqpair(0x205f550): expected_datao=0, payload_size=512 00:25:47.547 [2024-12-05 21:18:48.920839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920845] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920848] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:47.547 [2024-12-05 21:18:48.920860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:47.547 [2024-12-05 21:18:48.920867] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920871] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x205f550): datao=0, datal=4096, cccid=7 00:25:47.547 [2024-12-05 21:18:48.920875] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c1b80) on tqpair(0x205f550): expected_datao=0, payload_size=4096 00:25:47.547 [2024-12-05 21:18:48.920880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920920] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.920924] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:25:47.547 [2024-12-05 21:18:48.921064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.547 [2024-12-05 21:18:48.921070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.547 [2024-12-05 21:18:48.921074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.921079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1880) on tqpair=0x205f550 00:25:47.547 [2024-12-05 21:18:48.921091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.547 [2024-12-05 21:18:48.921097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.547 [2024-12-05 21:18:48.921100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.921104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1700) on tqpair=0x205f550 00:25:47.547 [2024-12-05 21:18:48.921114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.547 [2024-12-05 21:18:48.921120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.547 [2024-12-05 21:18:48.921124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.921127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1a00) on tqpair=0x205f550 00:25:47.547 [2024-12-05 21:18:48.921135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.547 [2024-12-05 21:18:48.921140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.547 [2024-12-05 21:18:48.921144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.547 [2024-12-05 21:18:48.921148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1b80) on tqpair=0x205f550 00:25:47.547 ===================================================== 00:25:47.547 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:25:47.547 ===================================================== 00:25:47.547 Controller Capabilities/Features 00:25:47.547 ================================ 00:25:47.547 Vendor ID: 8086 00:25:47.547 Subsystem Vendor ID: 8086 00:25:47.547 Serial Number: SPDK00000000000001 00:25:47.547 Model Number: SPDK bdev Controller 00:25:47.547 Firmware Version: 25.01 00:25:47.547 Recommended Arb Burst: 6 00:25:47.547 IEEE OUI Identifier: e4 d2 5c 00:25:47.547 Multi-path I/O 00:25:47.547 May have multiple subsystem ports: Yes 00:25:47.547 May have multiple controllers: Yes 00:25:47.547 Associated with SR-IOV VF: No 00:25:47.547 Max Data Transfer Size: 131072 00:25:47.547 Max Number of Namespaces: 32 00:25:47.547 Max Number of I/O Queues: 127 00:25:47.547 NVMe Specification Version (VS): 1.3 00:25:47.547 NVMe Specification Version (Identify): 1.3 00:25:47.547 Maximum Queue Entries: 128 00:25:47.547 Contiguous Queues Required: Yes 00:25:47.547 Arbitration Mechanisms Supported 00:25:47.547 Weighted Round Robin: Not Supported 00:25:47.547 Vendor Specific: Not Supported 00:25:47.547 Reset Timeout: 15000 ms 00:25:47.547 Doorbell Stride: 4 bytes 00:25:47.547 NVM Subsystem Reset: Not Supported 00:25:47.547 Command Sets Supported 00:25:47.547 NVM Command Set: Supported 00:25:47.547 Boot Partition: Not Supported 00:25:47.547 Memory Page Size Minimum: 4096 bytes 00:25:47.547 Memory Page Size Maximum: 4096 bytes 00:25:47.547 Persistent Memory Region: Not Supported 00:25:47.547 Optional Asynchronous Events Supported 00:25:47.547 Namespace Attribute Notices: Supported 00:25:47.547 Firmware Activation Notices: Not Supported 00:25:47.547 ANA Change Notices: Not Supported 00:25:47.547 PLE Aggregate Log Change Notices: Not Supported 00:25:47.547 LBA Status Info Alert Notices: Not Supported 00:25:47.547 EGE Aggregate Log Change Notices: Not Supported 00:25:47.547 Normal NVM Subsystem Shutdown event: Not Supported 00:25:47.547 Zone Descriptor Change Notices: Not Supported 00:25:47.547 Discovery 
Log Change Notices: Not Supported 00:25:47.547 Controller Attributes 00:25:47.547 128-bit Host Identifier: Supported 00:25:47.547 Non-Operational Permissive Mode: Not Supported 00:25:47.547 NVM Sets: Not Supported 00:25:47.547 Read Recovery Levels: Not Supported 00:25:47.547 Endurance Groups: Not Supported 00:25:47.547 Predictable Latency Mode: Not Supported 00:25:47.547 Traffic Based Keep ALive: Not Supported 00:25:47.547 Namespace Granularity: Not Supported 00:25:47.547 SQ Associations: Not Supported 00:25:47.547 UUID List: Not Supported 00:25:47.547 Multi-Domain Subsystem: Not Supported 00:25:47.547 Fixed Capacity Management: Not Supported 00:25:47.547 Variable Capacity Management: Not Supported 00:25:47.547 Delete Endurance Group: Not Supported 00:25:47.547 Delete NVM Set: Not Supported 00:25:47.547 Extended LBA Formats Supported: Not Supported 00:25:47.547 Flexible Data Placement Supported: Not Supported 00:25:47.547 00:25:47.547 Controller Memory Buffer Support 00:25:47.547 ================================ 00:25:47.547 Supported: No 00:25:47.547 00:25:47.547 Persistent Memory Region Support 00:25:47.547 ================================ 00:25:47.547 Supported: No 00:25:47.547 00:25:47.547 Admin Command Set Attributes 00:25:47.547 ============================ 00:25:47.547 Security Send/Receive: Not Supported 00:25:47.547 Format NVM: Not Supported 00:25:47.547 Firmware Activate/Download: Not Supported 00:25:47.547 Namespace Management: Not Supported 00:25:47.547 Device Self-Test: Not Supported 00:25:47.547 Directives: Not Supported 00:25:47.547 NVMe-MI: Not Supported 00:25:47.547 Virtualization Management: Not Supported 00:25:47.547 Doorbell Buffer Config: Not Supported 00:25:47.547 Get LBA Status Capability: Not Supported 00:25:47.547 Command & Feature Lockdown Capability: Not Supported 00:25:47.547 Abort Command Limit: 4 00:25:47.547 Async Event Request Limit: 4 00:25:47.547 Number of Firmware Slots: N/A 00:25:47.547 Firmware Slot 1 Read-Only: N/A 00:25:47.547 
Firmware Activation Without Reset: N/A 00:25:47.547 Multiple Update Detection Support: N/A 00:25:47.547 Firmware Update Granularity: No Information Provided 00:25:47.547 Per-Namespace SMART Log: No 00:25:47.547 Asymmetric Namespace Access Log Page: Not Supported 00:25:47.547 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:47.547 Command Effects Log Page: Supported 00:25:47.547 Get Log Page Extended Data: Supported 00:25:47.547 Telemetry Log Pages: Not Supported 00:25:47.547 Persistent Event Log Pages: Not Supported 00:25:47.547 Supported Log Pages Log Page: May Support 00:25:47.547 Commands Supported & Effects Log Page: Not Supported 00:25:47.547 Feature Identifiers & Effects Log Page:May Support 00:25:47.547 NVMe-MI Commands & Effects Log Page: May Support 00:25:47.547 Data Area 4 for Telemetry Log: Not Supported 00:25:47.547 Error Log Page Entries Supported: 128 00:25:47.547 Keep Alive: Supported 00:25:47.547 Keep Alive Granularity: 10000 ms 00:25:47.547 00:25:47.547 NVM Command Set Attributes 00:25:47.547 ========================== 00:25:47.548 Submission Queue Entry Size 00:25:47.548 Max: 64 00:25:47.548 Min: 64 00:25:47.548 Completion Queue Entry Size 00:25:47.548 Max: 16 00:25:47.548 Min: 16 00:25:47.548 Number of Namespaces: 32 00:25:47.548 Compare Command: Supported 00:25:47.548 Write Uncorrectable Command: Not Supported 00:25:47.548 Dataset Management Command: Supported 00:25:47.548 Write Zeroes Command: Supported 00:25:47.548 Set Features Save Field: Not Supported 00:25:47.548 Reservations: Supported 00:25:47.548 Timestamp: Not Supported 00:25:47.548 Copy: Supported 00:25:47.548 Volatile Write Cache: Present 00:25:47.548 Atomic Write Unit (Normal): 1 00:25:47.548 Atomic Write Unit (PFail): 1 00:25:47.548 Atomic Compare & Write Unit: 1 00:25:47.548 Fused Compare & Write: Supported 00:25:47.548 Scatter-Gather List 00:25:47.548 SGL Command Set: Supported 00:25:47.548 SGL Keyed: Supported 00:25:47.548 SGL Bit Bucket Descriptor: Not Supported 00:25:47.548 SGL 
Metadata Pointer: Not Supported 00:25:47.548 Oversized SGL: Not Supported 00:25:47.548 SGL Metadata Address: Not Supported 00:25:47.548 SGL Offset: Supported 00:25:47.548 Transport SGL Data Block: Not Supported 00:25:47.548 Replay Protected Memory Block: Not Supported 00:25:47.548 00:25:47.548 Firmware Slot Information 00:25:47.548 ========================= 00:25:47.548 Active slot: 1 00:25:47.548 Slot 1 Firmware Revision: 25.01 00:25:47.548 00:25:47.548 00:25:47.548 Commands Supported and Effects 00:25:47.548 ============================== 00:25:47.548 Admin Commands 00:25:47.548 -------------- 00:25:47.548 Get Log Page (02h): Supported 00:25:47.548 Identify (06h): Supported 00:25:47.548 Abort (08h): Supported 00:25:47.548 Set Features (09h): Supported 00:25:47.548 Get Features (0Ah): Supported 00:25:47.548 Asynchronous Event Request (0Ch): Supported 00:25:47.548 Keep Alive (18h): Supported 00:25:47.548 I/O Commands 00:25:47.548 ------------ 00:25:47.548 Flush (00h): Supported LBA-Change 00:25:47.548 Write (01h): Supported LBA-Change 00:25:47.548 Read (02h): Supported 00:25:47.548 Compare (05h): Supported 00:25:47.548 Write Zeroes (08h): Supported LBA-Change 00:25:47.548 Dataset Management (09h): Supported LBA-Change 00:25:47.548 Copy (19h): Supported LBA-Change 00:25:47.548 00:25:47.548 Error Log 00:25:47.548 ========= 00:25:47.548 00:25:47.548 Arbitration 00:25:47.548 =========== 00:25:47.548 Arbitration Burst: 1 00:25:47.548 00:25:47.548 Power Management 00:25:47.548 ================ 00:25:47.548 Number of Power States: 1 00:25:47.548 Current Power State: Power State #0 00:25:47.548 Power State #0: 00:25:47.548 Max Power: 0.00 W 00:25:47.548 Non-Operational State: Operational 00:25:47.548 Entry Latency: Not Reported 00:25:47.548 Exit Latency: Not Reported 00:25:47.548 Relative Read Throughput: 0 00:25:47.548 Relative Read Latency: 0 00:25:47.548 Relative Write Throughput: 0 00:25:47.548 Relative Write Latency: 0 00:25:47.548 Idle Power: Not Reported 
00:25:47.548 Active Power: Not Reported 00:25:47.548 Non-Operational Permissive Mode: Not Supported 00:25:47.548 00:25:47.548 Health Information 00:25:47.548 ================== 00:25:47.548 Critical Warnings: 00:25:47.548 Available Spare Space: OK 00:25:47.548 Temperature: OK 00:25:47.548 Device Reliability: OK 00:25:47.548 Read Only: No 00:25:47.548 Volatile Memory Backup: OK 00:25:47.548 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:47.548 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:47.548 Available Spare: 0% 00:25:47.548 Available Spare Threshold: 0% 00:25:47.548 Life Percentage Used:[2024-12-05 21:18:48.921242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x205f550) 00:25:47.548 [2024-12-05 21:18:48.921254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.548 [2024-12-05 21:18:48.921266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1b80, cid 7, qid 0 00:25:47.548 [2024-12-05 21:18:48.921457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.548 [2024-12-05 21:18:48.921463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.548 [2024-12-05 21:18:48.921467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1b80) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.921502] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:47.548 [2024-12-05 21:18:48.921511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1100) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.921517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.548 [2024-12-05 21:18:48.921523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1280) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.921527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.548 [2024-12-05 21:18:48.921532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1400) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.548 [2024-12-05 21:18:48.921542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.921547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.548 [2024-12-05 21:18:48.921555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.548 [2024-12-05 21:18:48.921569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.548 [2024-12-05 21:18:48.921580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.548 [2024-12-05 21:18:48.921754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.548 [2024-12-05 21:18:48.921761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.548 [2024-12-05 21:18:48.921765] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.921775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.921783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.548 [2024-12-05 21:18:48.921789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.548 [2024-12-05 21:18:48.921802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.548 [2024-12-05 21:18:48.922032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.548 [2024-12-05 21:18:48.922040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.548 [2024-12-05 21:18:48.922043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.922052] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:47.548 [2024-12-05 21:18:48.922057] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:47.548 [2024-12-05 21:18:48.922066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.548 
[2024-12-05 21:18:48.922080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.548 [2024-12-05 21:18:48.922091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.548 [2024-12-05 21:18:48.922267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.548 [2024-12-05 21:18:48.922274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.548 [2024-12-05 21:18:48.922277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.922291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.548 [2024-12-05 21:18:48.922305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.548 [2024-12-05 21:18:48.922315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.548 [2024-12-05 21:18:48.922526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.548 [2024-12-05 21:18:48.922532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.548 [2024-12-05 21:18:48.922536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.548 [2024-12-05 21:18:48.922549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.548 [2024-12-05 
21:18:48.922553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.548 [2024-12-05 21:18:48.922557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.548 [2024-12-05 21:18:48.922564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.549 [2024-12-05 21:18:48.922576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.549 [2024-12-05 21:18:48.922720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.549 [2024-12-05 21:18:48.922726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.549 [2024-12-05 21:18:48.922730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.549 [2024-12-05 21:18:48.922734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.549 [2024-12-05 21:18:48.922743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:47.549 [2024-12-05 21:18:48.922747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:47.549 [2024-12-05 21:18:48.922751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x205f550) 00:25:47.549 [2024-12-05 21:18:48.922758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.549 [2024-12-05 21:18:48.922767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c1580, cid 3, qid 0 00:25:47.549 [2024-12-05 21:18:48.926868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:47.549 [2024-12-05 21:18:48.926877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:47.549 [2024-12-05 21:18:48.926880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:47.549 [2024-12-05 
21:18:48.926884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c1580) on tqpair=0x205f550 00:25:47.549 [2024-12-05 21:18:48.926892] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:47.549 0% 00:25:47.549 Data Units Read: 0 00:25:47.549 Data Units Written: 0 00:25:47.549 Host Read Commands: 0 00:25:47.549 Host Write Commands: 0 00:25:47.549 Controller Busy Time: 0 minutes 00:25:47.549 Power Cycles: 0 00:25:47.549 Power On Hours: 0 hours 00:25:47.549 Unsafe Shutdowns: 0 00:25:47.549 Unrecoverable Media Errors: 0 00:25:47.549 Lifetime Error Log Entries: 0 00:25:47.549 Warning Temperature Time: 0 minutes 00:25:47.549 Critical Temperature Time: 0 minutes 00:25:47.549 00:25:47.549 Number of Queues 00:25:47.549 ================ 00:25:47.549 Number of I/O Submission Queues: 127 00:25:47.549 Number of I/O Completion Queues: 127 00:25:47.549 00:25:47.549 Active Namespaces 00:25:47.549 ================= 00:25:47.549 Namespace ID:1 00:25:47.549 Error Recovery Timeout: Unlimited 00:25:47.549 Command Set Identifier: NVM (00h) 00:25:47.549 Deallocate: Supported 00:25:47.549 Deallocated/Unwritten Error: Not Supported 00:25:47.549 Deallocated Read Value: Unknown 00:25:47.549 Deallocate in Write Zeroes: Not Supported 00:25:47.549 Deallocated Guard Field: 0xFFFF 00:25:47.549 Flush: Supported 00:25:47.549 Reservation: Supported 00:25:47.549 Namespace Sharing Capabilities: Multiple Controllers 00:25:47.549 Size (in LBAs): 131072 (0GiB) 00:25:47.549 Capacity (in LBAs): 131072 (0GiB) 00:25:47.549 Utilization (in LBAs): 131072 (0GiB) 00:25:47.549 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:47.549 EUI64: ABCDEF0123456789 00:25:47.549 UUID: cbf48556-a6e0-438f-97a5-8e22729b361d 00:25:47.549 Thin Provisioning: Not Supported 00:25:47.549 Per-NS Atomic Units: Yes 00:25:47.549 Atomic Boundary Size (Normal): 0 00:25:47.549 Atomic Boundary Size (PFail): 0 00:25:47.549 Atomic Boundary 
Offset: 0 00:25:47.549 Maximum Single Source Range Length: 65535 00:25:47.549 Maximum Copy Length: 65535 00:25:47.549 Maximum Source Range Count: 1 00:25:47.549 NGUID/EUI64 Never Reused: No 00:25:47.549 Namespace Write Protected: No 00:25:47.549 Number of LBA Formats: 1 00:25:47.549 Current LBA Format: LBA Format #00 00:25:47.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:47.549 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.549 21:18:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.811 rmmod nvme_tcp 00:25:47.811 rmmod nvme_fabrics 00:25:47.811 rmmod nvme_keyring 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # 
set -e 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2200862 ']' 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2200862 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2200862 ']' 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2200862 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2200862 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2200862' 00:25:47.811 killing process with pid 2200862 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2200862 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2200862 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:47.811 21:18:49 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.811 21:18:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.407 00:25:50.407 real 0m12.632s 00:25:50.407 user 0m8.666s 00:25:50.407 sys 0m6.909s 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:50.407 ************************************ 00:25:50.407 END TEST nvmf_identify 00:25:50.407 ************************************ 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.407 ************************************ 00:25:50.407 START TEST nvmf_perf 00:25:50.407 ************************************ 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:50.407 * Looking for test storage... 00:25:50.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.407 21:18:51 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.407 --rc genhtml_branch_coverage=1 00:25:50.407 --rc genhtml_function_coverage=1 00:25:50.407 --rc genhtml_legend=1 00:25:50.407 --rc geninfo_all_blocks=1 00:25:50.407 --rc geninfo_unexecuted_blocks=1 00:25:50.407 00:25:50.407 ' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:25:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.407 --rc genhtml_branch_coverage=1 00:25:50.407 --rc genhtml_function_coverage=1 00:25:50.407 --rc genhtml_legend=1 00:25:50.407 --rc geninfo_all_blocks=1 00:25:50.407 --rc geninfo_unexecuted_blocks=1 00:25:50.407 00:25:50.407 ' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.407 --rc genhtml_branch_coverage=1 00:25:50.407 --rc genhtml_function_coverage=1 00:25:50.407 --rc genhtml_legend=1 00:25:50.407 --rc geninfo_all_blocks=1 00:25:50.407 --rc geninfo_unexecuted_blocks=1 00:25:50.407 00:25:50.407 ' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:50.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.407 --rc genhtml_branch_coverage=1 00:25:50.407 --rc genhtml_function_coverage=1 00:25:50.407 --rc genhtml_legend=1 00:25:50.407 --rc geninfo_all_blocks=1 00:25:50.407 --rc geninfo_unexecuted_blocks=1 00:25:50.407 00:25:50.407 ' 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:50.407 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.408 21:18:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:50.408 21:18:51 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.408 21:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.571 21:18:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.571 
21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:58.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:58.571 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:58.571 Found net devices under 0000:31:00.0: cvl_0_0 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.571 21:18:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:58.571 Found net devices under 0000:31:00.1: cvl_0_1 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.571 21:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.571 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.832 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.832 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:25:58.833 00:25:58.833 --- 10.0.0.2 ping statistics --- 00:25:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.833 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:58.833 00:25:58.833 --- 10.0.0.1 ping statistics --- 00:25:58.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.833 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2205970 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2205970 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2205970 ']' 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.833 21:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:58.833 [2024-12-05 21:19:00.267694] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:25:58.833 [2024-12-05 21:19:00.267763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.093 [2024-12-05 21:19:00.359560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:59.093 [2024-12-05 21:19:00.401667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.093 [2024-12-05 21:19:00.401706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.093 [2024-12-05 21:19:00.401714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.093 [2024-12-05 21:19:00.401721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.093 [2024-12-05 21:19:00.401726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:59.093 [2024-12-05 21:19:00.403568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.093 [2024-12-05 21:19:00.403693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.093 [2024-12-05 21:19:00.403852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.093 [2024-12-05 21:19:00.403853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:59.664 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.664 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:59.664 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.664 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:59.664 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:59.925 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.925 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:59.925 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:00.497 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:00.497 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:00.497 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:00.497 21:19:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:00.758 21:19:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:00.758 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:00.758 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:00.758 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:00.758 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:00.758 [2024-12-05 21:19:02.160991] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.758 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.018 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:01.018 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.278 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:01.278 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:01.278 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.538 [2024-12-05 21:19:02.859600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.538 21:19:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:26:01.798 21:19:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:01.798 21:19:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:01.798 21:19:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:01.798 21:19:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:03.181 Initializing NVMe Controllers 00:26:03.181 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:03.181 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:03.181 Initialization complete. Launching workers. 00:26:03.181 ======================================================== 00:26:03.181 Latency(us) 00:26:03.181 Device Information : IOPS MiB/s Average min max 00:26:03.181 PCIE (0000:65:00.0) NSID 1 from core 0: 79132.60 309.11 403.80 13.28 4998.07 00:26:03.181 ======================================================== 00:26:03.181 Total : 79132.60 309.11 403.80 13.28 4998.07 00:26:03.181 00:26:03.181 21:19:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.562 Initializing NVMe Controllers 00:26:04.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:04.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:04.562 Initialization complete. Launching workers. 
00:26:04.562 ======================================================== 00:26:04.562 Latency(us) 00:26:04.562 Device Information : IOPS MiB/s Average min max 00:26:04.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.00 0.36 11093.63 103.28 45725.46 00:26:04.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16485.31 7949.84 47905.52 00:26:04.562 ======================================================== 00:26:04.562 Total : 154.00 0.60 13229.30 103.28 47905.52 00:26:04.562 00:26:04.562 21:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:05.944 Initializing NVMe Controllers 00:26:05.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:05.944 Initialization complete. Launching workers. 
00:26:05.944 ======================================================== 00:26:05.944 Latency(us) 00:26:05.944 Device Information : IOPS MiB/s Average min max 00:26:05.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10394.77 40.60 3078.89 411.14 10293.36 00:26:05.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3639.21 14.22 8847.11 4803.24 16306.36 00:26:05.944 ======================================================== 00:26:05.944 Total : 14033.98 54.82 4574.67 411.14 16306.36 00:26:05.944 00:26:05.944 21:19:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:05.944 21:19:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:05.944 21:19:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.486 Initializing NVMe Controllers 00:26:08.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.486 Controller IO queue size 128, less than required. 00:26:08.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.486 Controller IO queue size 128, less than required. 00:26:08.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:08.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:08.486 Initialization complete. Launching workers. 
00:26:08.486 ======================================================== 00:26:08.486 Latency(us) 00:26:08.486 Device Information : IOPS MiB/s Average min max 00:26:08.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1605.47 401.37 81029.01 51840.00 122302.18 00:26:08.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.99 152.25 219246.56 47219.49 362502.64 00:26:08.486 ======================================================== 00:26:08.486 Total : 2214.46 553.62 119039.62 47219.49 362502.64 00:26:08.486 00:26:08.486 21:19:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:08.745 No valid NVMe controllers or AIO or URING devices found 00:26:08.745 Initializing NVMe Controllers 00:26:08.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.745 Controller IO queue size 128, less than required. 00:26:08.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.746 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:08.746 Controller IO queue size 128, less than required. 00:26:08.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:08.746 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:08.746 WARNING: Some requested NVMe devices were skipped 00:26:08.746 21:19:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:11.286 Initializing NVMe Controllers 00:26:11.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:11.286 Controller IO queue size 128, less than required. 00:26:11.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:11.286 Controller IO queue size 128, less than required. 00:26:11.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:11.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:11.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:11.286 Initialization complete. Launching workers. 
00:26:11.286 00:26:11.286 ==================== 00:26:11.286 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:11.286 TCP transport: 00:26:11.286 polls: 20264 00:26:11.286 idle_polls: 11090 00:26:11.286 sock_completions: 9174 00:26:11.286 nvme_completions: 6241 00:26:11.286 submitted_requests: 9380 00:26:11.286 queued_requests: 1 00:26:11.286 00:26:11.286 ==================== 00:26:11.286 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:11.286 TCP transport: 00:26:11.286 polls: 19932 00:26:11.286 idle_polls: 10035 00:26:11.286 sock_completions: 9897 00:26:11.286 nvme_completions: 6695 00:26:11.286 submitted_requests: 10064 00:26:11.286 queued_requests: 1 00:26:11.286 ======================================================== 00:26:11.286 Latency(us) 00:26:11.286 Device Information : IOPS MiB/s Average min max 00:26:11.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1560.00 390.00 83567.51 40727.30 158794.32 00:26:11.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1673.50 418.37 77230.36 32975.31 124203.19 00:26:11.286 ======================================================== 00:26:11.286 Total : 3233.50 808.37 80287.71 32975.31 158794.32 00:26:11.286 00:26:11.286 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:11.286 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:11.546 21:19:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:11.546 rmmod nvme_tcp 00:26:11.546 rmmod nvme_fabrics 00:26:11.546 rmmod nvme_keyring 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2205970 ']' 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2205970 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2205970 ']' 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2205970 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2205970 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2205970' 00:26:11.546 killing process with pid 2205970 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 2205970 00:26:11.546 21:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2205970 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.456 21:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.993 00:26:15.993 real 0m25.518s 00:26:15.993 user 0m59.543s 00:26:15.993 sys 0m9.210s 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:15.993 ************************************ 00:26:15.993 END TEST nvmf_perf 00:26:15.993 ************************************ 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.993 21:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.993 ************************************ 00:26:15.993 START TEST nvmf_fio_host 00:26:15.993 ************************************ 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:15.993 * Looking for test storage... 00:26:15.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.993 21:19:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.993 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.993 21:19:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:15.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.994 --rc genhtml_branch_coverage=1 00:26:15.994 --rc genhtml_function_coverage=1 00:26:15.994 --rc genhtml_legend=1 00:26:15.994 --rc geninfo_all_blocks=1 00:26:15.994 --rc geninfo_unexecuted_blocks=1 00:26:15.994 00:26:15.994 ' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:15.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.994 --rc genhtml_branch_coverage=1 00:26:15.994 --rc genhtml_function_coverage=1 00:26:15.994 --rc genhtml_legend=1 00:26:15.994 --rc geninfo_all_blocks=1 00:26:15.994 --rc geninfo_unexecuted_blocks=1 00:26:15.994 00:26:15.994 ' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:15.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.994 --rc genhtml_branch_coverage=1 00:26:15.994 --rc genhtml_function_coverage=1 00:26:15.994 --rc genhtml_legend=1 00:26:15.994 --rc geninfo_all_blocks=1 00:26:15.994 --rc geninfo_unexecuted_blocks=1 00:26:15.994 00:26:15.994 ' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:15.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.994 --rc genhtml_branch_coverage=1 00:26:15.994 --rc genhtml_function_coverage=1 00:26:15.994 --rc genhtml_legend=1 00:26:15.994 --rc geninfo_all_blocks=1 00:26:15.994 --rc geninfo_unexecuted_blocks=1 00:26:15.994 00:26:15.994 ' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:15.994 21:19:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:15.994 21:19:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.123 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:26:24.124 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:24.124 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.124 21:19:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:24.124 Found net devices under 0000:31:00.0: cvl_0_0 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:24.124 Found net devices under 0000:31:00.1: cvl_0_1 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.124 21:19:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:24.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:26:24.124 00:26:24.124 --- 10.0.0.2 ping statistics --- 00:26:24.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.124 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:26:24.124 00:26:24.124 --- 10.0.0.1 ping statistics --- 00:26:24.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.124 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.124 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2213535 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2213535 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2213535 ']' 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.384 21:19:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.384 [2024-12-05 21:19:25.639625] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:26:24.384 [2024-12-05 21:19:25.639694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.384 [2024-12-05 21:19:25.730871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.384 [2024-12-05 21:19:25.772089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.384 [2024-12-05 21:19:25.772127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:24.384 [2024-12-05 21:19:25.772134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.384 [2024-12-05 21:19:25.772141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.384 [2024-12-05 21:19:25.772147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.384 [2024-12-05 21:19:25.773811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.384 [2024-12-05 21:19:25.773995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.384 [2024-12-05 21:19:25.774135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.384 [2024-12-05 21:19:25.774136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.323 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.323 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:25.323 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:25.324 [2024-12-05 21:19:26.606379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.324 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:25.324 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.324 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.324 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:25.583 Malloc1 00:26:25.583 21:19:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.842 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:25.842 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.102 [2024-12-05 21:19:27.410692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.102 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:26.361 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:26.361 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:26.361 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:26.361 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:26.362 21:19:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:26.362 21:19:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:26.622 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:26.622 fio-3.35 00:26:26.622 Starting 1 thread 00:26:29.169 00:26:29.169 test: (groupid=0, jobs=1): err= 0: pid=2214244: Thu Dec 5 21:19:30 2024 00:26:29.169 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:26:29.169 slat (usec): min=2, max=225, avg= 2.15, stdev= 1.94 00:26:29.169 clat (usec): min=3069, max=8917, avg=5130.21, stdev=453.01 00:26:29.169 lat (usec): min=3104, max=8919, avg=5132.36, stdev=453.10 00:26:29.169 clat percentiles (usec): 00:26:29.169 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:26:29.169 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:26:29.169 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:26:29.169 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 8160], 99.95th=[ 8225], 00:26:29.169 | 99.99th=[ 8586] 00:26:29.169 bw ( KiB/s): min=52184, max=55848, per=99.94%, avg=54904.00, stdev=1813.63, samples=4 00:26:29.169 iops : min=13046, max=13962, avg=13726.00, stdev=453.41, samples=4 00:26:29.169 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2004msec); 0 zone resets 00:26:29.169 slat (usec): min=2, max=214, avg= 2.22, stdev= 1.44 00:26:29.169 clat (usec): min=2384, max=7527, avg=4142.20, stdev=377.77 00:26:29.169 lat (usec): min=2402, max=7529, avg=4144.42, stdev=377.90 00:26:29.169 clat percentiles (usec): 00:26:29.169 | 1.00th=[ 3490], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:26:29.169 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:26:29.169 | 70.00th=[ 
4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:26:29.169 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 6652], 99.95th=[ 6718], 00:26:29.169 | 99.99th=[ 7046] 00:26:29.169 bw ( KiB/s): min=52544, max=55784, per=100.00%, avg=54834.00, stdev=1539.71, samples=4 00:26:29.169 iops : min=13136, max=13946, avg=13708.50, stdev=384.93, samples=4 00:26:29.169 lat (msec) : 4=16.86%, 10=83.14% 00:26:29.169 cpu : usr=74.49%, sys=24.06%, ctx=35, majf=0, minf=16 00:26:29.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:29.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:29.169 issued rwts: total=27524,27472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:29.169 00:26:29.169 Run status group 0 (all jobs): 00:26:29.169 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:26:29.169 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (113MB), run=2004-2004msec 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:29.169 
21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:29.169 21:19:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:29.739 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:29.739 fio-3.35 00:26:29.739 Starting 1 thread 00:26:32.285 00:26:32.285 test: (groupid=0, jobs=1): err= 0: pid=2214902: Thu Dec 5 21:19:33 2024 00:26:32.285 read: IOPS=9557, BW=149MiB/s (157MB/s)(300MiB/2006msec) 00:26:32.285 slat (usec): min=3, max=109, avg= 3.63, stdev= 1.59 00:26:32.285 clat (usec): min=1519, max=15923, avg=8128.85, stdev=2054.84 00:26:32.285 lat (usec): min=1523, max=15927, avg=8132.48, stdev=2054.97 00:26:32.285 clat percentiles (usec): 00:26:32.285 | 1.00th=[ 4178], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6259], 00:26:32.285 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8586], 00:26:32.285 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[11338], 00:26:32.285 | 99.00th=[13304], 99.50th=[13960], 99.90th=[14746], 99.95th=[15401], 00:26:32.285 | 99.99th=[15533] 00:26:32.285 bw ( KiB/s): min=62432, max=96480, per=49.07%, avg=75040.00, stdev=15428.03, samples=4 00:26:32.285 iops : min= 3902, max= 6030, avg=4690.00, stdev=964.25, samples=4 00:26:32.285 write: IOPS=5763, BW=90.1MiB/s (94.4MB/s)(154MiB/1711msec); 0 zone resets 00:26:32.285 slat (usec): min=39, max=452, avg=41.13, stdev= 8.52 00:26:32.285 clat (usec): min=2065, max=17081, avg=9294.52, stdev=1721.47 00:26:32.285 lat (usec): min=2105, max=17121, avg=9335.65, stdev=1723.27 00:26:32.285 clat percentiles (usec): 00:26:32.285 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 7898], 
00:26:32.285 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:26:32.285 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12387], 00:26:32.285 | 99.00th=[14484], 99.50th=[15139], 99.90th=[16057], 99.95th=[16319], 00:26:32.285 | 99.99th=[17171] 00:26:32.285 bw ( KiB/s): min=64704, max=99808, per=84.97%, avg=78352.00, stdev=15496.96, samples=4 00:26:32.285 iops : min= 4044, max= 6238, avg=4897.00, stdev=968.56, samples=4 00:26:32.285 lat (msec) : 2=0.05%, 4=0.55%, 10=76.99%, 20=22.41% 00:26:32.285 cpu : usr=88.43%, sys=10.52%, ctx=17, majf=0, minf=44 00:26:32.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:32.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:32.285 issued rwts: total=19173,9861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:32.285 00:26:32.285 Run status group 0 (all jobs): 00:26:32.285 READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=300MiB (314MB), run=2006-2006msec 00:26:32.285 WRITE: bw=90.1MiB/s (94.4MB/s), 90.1MiB/s-90.1MiB/s (94.4MB/s-94.4MB/s), io=154MiB (162MB), run=1711-1711msec 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.285 
21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.285 rmmod nvme_tcp 00:26:32.285 rmmod nvme_fabrics 00:26:32.285 rmmod nvme_keyring 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2213535 ']' 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2213535 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2213535 ']' 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2213535 00:26:32.285 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:32.286 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.286 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213535 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213535' 00:26:32.548 
killing process with pid 2213535 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2213535 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2213535 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.548 21:19:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.090 00:26:35.090 real 0m18.939s 00:26:35.090 user 1m11.299s 00:26:35.090 sys 0m8.196s 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.090 ************************************ 00:26:35.090 END 
TEST nvmf_fio_host 00:26:35.090 ************************************ 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.090 21:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.090 ************************************ 00:26:35.090 START TEST nvmf_failover 00:26:35.090 ************************************ 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:35.090 * Looking for test storage... 00:26:35.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.090 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.091 --rc genhtml_branch_coverage=1 00:26:35.091 --rc genhtml_function_coverage=1 00:26:35.091 --rc genhtml_legend=1 00:26:35.091 --rc geninfo_all_blocks=1 00:26:35.091 --rc geninfo_unexecuted_blocks=1 00:26:35.091 00:26:35.091 ' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.091 --rc genhtml_branch_coverage=1 00:26:35.091 --rc genhtml_function_coverage=1 00:26:35.091 --rc genhtml_legend=1 00:26:35.091 --rc geninfo_all_blocks=1 00:26:35.091 --rc geninfo_unexecuted_blocks=1 00:26:35.091 00:26:35.091 ' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.091 --rc genhtml_branch_coverage=1 00:26:35.091 --rc genhtml_function_coverage=1 00:26:35.091 --rc genhtml_legend=1 00:26:35.091 --rc geninfo_all_blocks=1 00:26:35.091 --rc geninfo_unexecuted_blocks=1 00:26:35.091 00:26:35.091 ' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.091 --rc genhtml_branch_coverage=1 00:26:35.091 --rc genhtml_function_coverage=1 00:26:35.091 --rc genhtml_legend=1 00:26:35.091 --rc geninfo_all_blocks=1 
00:26:35.091 --rc geninfo_unexecuted_blocks=1 00:26:35.091 00:26:35.091 ' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.091 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.092 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.092 21:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.238 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.239 21:19:43 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:43.239 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:43.239 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:43.239 Found net devices under 0000:31:00.0: cvl_0_0 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:43.239 Found net devices under 0000:31:00.1: cvl_0_1 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.239 21:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:43.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:26:43.239 00:26:43.239 --- 10.0.0.2 ping statistics --- 00:26:43.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.239 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:43.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:26:43.239 00:26:43.239 --- 10.0.0.1 ping statistics --- 00:26:43.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.239 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2220024 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2220024 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2220024 ']' 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.239 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.240 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.240 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.240 21:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.240 [2024-12-05 21:19:44.286563] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:26:43.240 [2024-12-05 21:19:44.286633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.240 [2024-12-05 21:19:44.396832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:43.240 [2024-12-05 21:19:44.450146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.240 [2024-12-05 21:19:44.450221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.240 [2024-12-05 21:19:44.450231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.240 [2024-12-05 21:19:44.450238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:43.240 [2024-12-05 21:19:44.450244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:43.240 [2024-12-05 21:19:44.452098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.240 [2024-12-05 21:19:44.452398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.240 [2024-12-05 21:19:44.452400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.811 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:44.072 [2024-12-05 21:19:45.274107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.072 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:44.072 Malloc0 00:26:44.333 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.333 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:44.595 21:19:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.595 [2024-12-05 21:19:46.025294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.855 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:44.855 [2024-12-05 21:19:46.209765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:44.855 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:45.116 [2024-12-05 21:19:46.394358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2220596 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2220596 /var/tmp/bdevperf.sock 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2220596 ']' 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:45.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.116 21:19:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:46.060 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.060 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:46.060 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:46.320 NVMe0n1 00:26:46.320 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:46.582 00:26:46.582 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2220829 00:26:46.582 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:46.582 21:19:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:26:47.525 21:19:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.786 [2024-12-05 21:19:49.091190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1596510 is same with the state(6) to be set 00:26:47.787 21:19:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:51.089 21:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:51.089 00:26:51.089 21:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.350 [2024-12-05 21:19:52.642226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1596fc0 is same with the state(6) to be set 00:26:51.351 21:19:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:54.699 21:19:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.699 [2024-12-05 21:19:55.831540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.699 21:19:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:55.639 21:19:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:55.639 [2024-12-05 21:19:57.016673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.016990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.016995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.016999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.017005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.017010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.017015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.017019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 [2024-12-05 21:19:57.017024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c620 is same with the state(6) to be set 00:26:55.640 21:19:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2220829 00:27:02.403 { 00:27:02.403 "results": [ 00:27:02.403 { 00:27:02.403 "job": "NVMe0n1", 00:27:02.403 "core_mask": "0x1", 00:27:02.403 "workload": "verify", 00:27:02.403 "status": "finished", 00:27:02.403 "verify_range": { 00:27:02.403 "start": 0, 00:27:02.403 "length": 16384 00:27:02.403 }, 00:27:02.403 "queue_depth": 128, 00:27:02.403 "io_size": 4096, 00:27:02.403 "runtime": 15.005089, 00:27:02.403 "iops": 11096.235417197458, 00:27:02.403 "mibps": 
43.34466959842757, 00:27:02.403 "io_failed": 8420, 00:27:02.403 "io_timeout": 0, 00:27:02.403 "avg_latency_us": 10951.842890464211, 00:27:02.403 "min_latency_us": 542.72, 00:27:02.403 "max_latency_us": 20097.706666666665 00:27:02.403 } 00:27:02.403 ], 00:27:02.403 "core_count": 1 00:27:02.403 } 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2220596 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2220596 ']' 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2220596 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220596 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220596' 00:27:02.403 killing process with pid 2220596 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2220596 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2220596 00:27:02.403 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:02.403 [2024-12-05 21:19:46.475617] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:27:02.403 [2024-12-05 21:19:46.475678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220596 ] 00:27:02.403 [2024-12-05 21:19:46.553269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.403 [2024-12-05 21:19:46.588951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.403 Running I/O for 15 seconds... 00:27:02.403 10983.00 IOPS, 42.90 MiB/s [2024-12-05T20:20:03.840Z] [2024-12-05 21:19:49.093481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.403 [2024-12-05 21:19:49.093801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.403 [2024-12-05 21:19:49.093818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.403 [2024-12-05 21:19:49.093835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.403 [2024-12-05 21:19:49.093852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.403 [2024-12-05 21:19:49.093861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.403 [2024-12-05 21:19:49.093874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:02.404 [2024-12-05 21:19:49.093883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.093916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.093935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.093952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.093968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.093985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.093992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 
21:19:49.094169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094259] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.404 [2024-12-05 21:19:49.094535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.404 [2024-12-05 21:19:49.094542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:02.405 [2024-12-05 21:19:49.094864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.094894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95464 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.094902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.094918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.094924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95472 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.094931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.094945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.094952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.094959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.094972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.094978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95488 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.094985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.094993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95496 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95504 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95512 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95520 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95528 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.405 [2024-12-05 21:19:49.095191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.405 [2024-12-05 21:19:49.095198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0
00:27:02.405 [2024-12-05 21:19:49.095205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.405 [2024-12-05 21:19:49.095215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94864 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94872 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94880 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94888 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95536 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95544 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95552 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95560 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95568 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95576 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95584 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95592 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95632 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95640 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95648 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.095736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.095743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95656 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.095758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.106514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.106543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95664 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.106555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.106568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.106574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.106580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95672 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.106588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.106596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.106602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.406 [2024-12-05 21:19:49.106608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95680 len:8 PRP1 0x0 PRP2 0x0
00:27:02.406 [2024-12-05 21:19:49.106616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.406 [2024-12-05 21:19:49.106628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.406 [2024-12-05 21:19:49.106634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95704 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95720 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94896 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94904 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94912 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94920 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94928 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94936 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94944 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.106966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:02.407 [2024-12-05 21:19:49.106972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:02.407 [2024-12-05 21:19:49.106978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94952 len:8 PRP1 0x0 PRP2 0x0
00:27:02.407 [2024-12-05 21:19:49.106985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.107027] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:02.407 [2024-12-05 21:19:49.107055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:02.407 [2024-12-05 21:19:49.107065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.107075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:02.407 [2024-12-05 21:19:49.107082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.107091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:02.407 [2024-12-05 21:19:49.107099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.107107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:02.407 [2024-12-05 21:19:49.107116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.407 [2024-12-05 21:19:49.107124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:27:02.407 [2024-12-05 21:19:49.107173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2048930 (9): Bad file descriptor
00:27:02.407 [2024-12-05 21:19:49.110749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:02.407 [2024-12-05 21:19:49.180057] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:27:02.407 10581.00 IOPS, 41.33 MiB/s [2024-12-05T20:20:03.844Z]
10811.00 IOPS, 42.23 MiB/s [2024-12-05T20:20:03.844Z]
10937.50 IOPS, 42.72 MiB/s [2024-12-05T20:20:03.844Z]
[2024-12-05 21:19:52.643821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.407 [2024-12-05 21:19:52.643856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.643985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.643993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.644003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.644010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.644020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.644027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.644037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.644050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.644060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.644068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.644077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.644085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408 [2024-12-05 21:19:52.644095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.408 [2024-12-05 21:19:52.644102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:02.408
[2024-12-05 21:19:52.644112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 
[2024-12-05 21:19:52.644402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.408 [2024-12-05 21:19:52.644547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.408 [2024-12-05 21:19:52.644556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 
[2024-12-05 21:19:52.644692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.644834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.409 [2024-12-05 21:19:52.644958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.644967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.644975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 
[2024-12-05 21:19:52.644984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.644992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34208 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.409 [2024-12-05 21:19:52.645233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.409 [2024-12-05 21:19:52.645243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 
21:19:52.645276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645368] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 
[2024-12-05 21:19:52.645752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645845] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.410 [2024-12-05 21:19:52.645905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.410 [2024-12-05 21:19:52.645914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.411 [2024-12-05 21:19:52.645921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.645930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.411 [2024-12-05 21:19:52.645937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.645946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.411 [2024-12-05 21:19:52.645954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.645963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.411 [2024-12-05 21:19:52.645970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.645981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.411 [2024-12-05 21:19:52.645988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.645997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.411 [2024-12-05 21:19:52.646005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:52.646022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:02.411 [2024-12-05 21:19:52.646055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:27:02.411 [2024-12-05 21:19:52.646062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34096 len:8 PRP1 0x0 PRP2 0x0 00:27:02.411 [2024-12-05 21:19:52.646070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646111] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:02.411 [2024-12-05 21:19:52.646132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.411 [2024-12-05 21:19:52.646140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.411 [2024-12-05 21:19:52.646156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.411 [2024-12-05 21:19:52.646172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.411 [2024-12-05 21:19:52.646187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:52.646195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:02.411 [2024-12-05 21:19:52.646228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2048930 (9): Bad file descriptor 00:27:02.411 [2024-12-05 21:19:52.649797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:02.411 [2024-12-05 21:19:52.760656] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:27:02.411 10806.40 IOPS, 42.21 MiB/s [2024-12-05T20:20:03.848Z] 10914.00 IOPS, 42.63 MiB/s [2024-12-05T20:20:03.848Z] 10977.43 IOPS, 42.88 MiB/s [2024-12-05T20:20:03.848Z] 11074.50 IOPS, 43.26 MiB/s [2024-12-05T20:20:03.848Z] [2024-12-05 21:19:57.018279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 
21:19:57.018380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.411 [2024-12-05 21:19:57.018699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.411 [2024-12-05 21:19:57.018709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 
[2024-12-05 21:19:57.018866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.018986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.018993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 
[2024-12-05 21:19:57.019153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.412 [2024-12-05 21:19:57.019306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.412 [2024-12-05 21:19:57.019314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 
[2024-12-05 21:19:57.019443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.413 [2024-12-05 21:19:57.019468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 
21:19:57.019728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019818] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.413 [2024-12-05 21:19:57.019973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.413 [2024-12-05 21:19:57.019982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.019990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.019999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 
[2024-12-05 21:19:57.020207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020300] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.414 [2024-12-05 21:19:57.020459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:02.414 [2024-12-05 21:19:57.020491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:02.414 [2024-12-05 21:19:57.020498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66576 len:8 PRP1 0x0 PRP2 0x0 00:27:02.414 [2024-12-05 21:19:57.020506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:02.414 [2024-12-05 21:19:57.020548] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:02.414 [2024-12-05 21:19:57.020569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.414 [2024-12-05 21:19:57.020578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.414 [2024-12-05 21:19:57.020594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.414 [2024-12-05 21:19:57.020610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.414 [2024-12-05 21:19:57.020625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.414 [2024-12-05 21:19:57.020633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:27:02.414 [2024-12-05 21:19:57.020667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2048930 (9): Bad file descriptor 00:27:02.414 [2024-12-05 21:19:57.024235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:02.414 11050.33 IOPS, 43.17 MiB/s [2024-12-05T20:20:03.851Z] [2024-12-05 21:19:57.058028] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:27:02.414 11040.00 IOPS, 43.12 MiB/s [2024-12-05T20:20:03.851Z] 11064.36 IOPS, 43.22 MiB/s [2024-12-05T20:20:03.851Z] 11073.75 IOPS, 43.26 MiB/s [2024-12-05T20:20:03.851Z] 11091.92 IOPS, 43.33 MiB/s [2024-12-05T20:20:03.851Z] 11100.36 IOPS, 43.36 MiB/s 00:27:02.414 Latency(us) 00:27:02.414 [2024-12-05T20:20:03.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.414 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:02.414 Verification LBA range: start 0x0 length 0x4000 00:27:02.414 NVMe0n1 : 15.01 11096.24 43.34 561.14 0.00 10951.84 542.72 20097.71 00:27:02.414 [2024-12-05T20:20:03.851Z] =================================================================================================================== 00:27:02.414 [2024-12-05T20:20:03.851Z] Total : 11096.24 43.34 561.14 0.00 10951.84 542.72 20097.71 00:27:02.415 Received shutdown signal, test time was about 15.000000 seconds 00:27:02.415 00:27:02.415 Latency(us) 00:27:02.415 [2024-12-05T20:20:03.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.415 [2024-12-05T20:20:03.852Z] =================================================================================================================== 00:27:02.415 [2024-12-05T20:20:03.852Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2223702 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2223702 /var/tmp/bdevperf.sock 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2223702 ']' 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:02.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.415 21:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:02.988 21:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.988 21:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:02.988 21:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:02.988 [2024-12-05 21:20:04.278469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.988 21:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:03.249 [2024-12-05 21:20:04.450877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:03.249 21:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:03.511 NVMe0n1 00:27:03.511 21:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:03.773 00:27:03.773 21:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:04.036 00:27:04.036 21:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:04.036 21:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:04.296 21:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:04.556 21:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:07.862 21:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:07.862 21:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:07.862 21:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2224964 00:27:07.862 21:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:07.862 21:20:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2224964 00:27:08.806 { 00:27:08.806 "results": [ 00:27:08.806 { 00:27:08.806 "job": "NVMe0n1", 00:27:08.806 "core_mask": "0x1", 00:27:08.806 "workload": "verify", 00:27:08.806 "status": "finished", 00:27:08.806 "verify_range": { 00:27:08.806 "start": 0, 00:27:08.806 "length": 16384 00:27:08.806 }, 00:27:08.806 "queue_depth": 128, 00:27:08.806 "io_size": 4096, 00:27:08.806 "runtime": 1.006837, 00:27:08.806 "iops": 10428.698985039286, 00:27:08.806 "mibps": 40.73710541030971, 00:27:08.806 "io_failed": 0, 00:27:08.806 "io_timeout": 0, 00:27:08.806 "avg_latency_us": 
12196.53827047619, 00:27:08.806 "min_latency_us": 1454.08, 00:27:08.806 "max_latency_us": 10048.853333333333 00:27:08.806 } 00:27:08.806 ], 00:27:08.806 "core_count": 1 00:27:08.806 } 00:27:08.806 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:08.806 [2024-12-05 21:20:03.335404] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:27:08.806 [2024-12-05 21:20:03.335463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223702 ] 00:27:08.806 [2024-12-05 21:20:03.413962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.806 [2024-12-05 21:20:03.449251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.806 [2024-12-05 21:20:05.764553] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:08.806 [2024-12-05 21:20:05.764599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.806 [2024-12-05 21:20:05.764615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.806 [2024-12-05 21:20:05.764627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.806 [2024-12-05 21:20:05.764634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.806 [2024-12-05 21:20:05.764643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:08.806 [2024-12-05 21:20:05.764650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.806 [2024-12-05 21:20:05.764658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.806 [2024-12-05 21:20:05.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.806 [2024-12-05 21:20:05.764673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:08.806 [2024-12-05 21:20:05.764703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:08.806 [2024-12-05 21:20:05.764719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bde930 (9): Bad file descriptor 00:27:08.806 [2024-12-05 21:20:05.815335] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:08.806 Running I/O for 1 seconds... 
00:27:08.806 10341.00 IOPS, 40.39 MiB/s 00:27:08.806 Latency(us) 00:27:08.806 [2024-12-05T20:20:10.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.806 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:08.806 Verification LBA range: start 0x0 length 0x4000 00:27:08.806 NVMe0n1 : 1.01 10428.70 40.74 0.00 0.00 12196.54 1454.08 10048.85 00:27:08.806 [2024-12-05T20:20:10.243Z] =================================================================================================================== 00:27:08.806 [2024-12-05T20:20:10.244Z] Total : 10428.70 40.74 0.00 0.00 12196.54 1454.08 10048.85 00:27:08.807 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:08.807 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:09.067 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:09.067 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:09.067 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:09.328 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:09.590 21:20:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:12.896 21:20:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:12.896 21:20:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2223702 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2223702 ']' 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2223702 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2223702 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2223702' 00:27:12.896 killing process with pid 2223702 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2223702 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2223702 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:12.896 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.157 rmmod nvme_tcp 00:27:13.157 rmmod nvme_fabrics 00:27:13.157 rmmod nvme_keyring 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2220024 ']' 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2220024 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2220024 ']' 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2220024 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220024 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220024' 00:27:13.157 killing process with pid 2220024 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2220024 00:27:13.157 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2220024 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.418 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.419 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.419 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.419 21:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.332 21:20:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.332 00:27:15.332 real 0m40.735s 00:27:15.332 user 2m3.859s 00:27:15.332 sys 
0m9.011s 00:27:15.332 21:20:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.332 21:20:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:15.332 ************************************ 00:27:15.332 END TEST nvmf_failover 00:27:15.332 ************************************ 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.593 ************************************ 00:27:15.593 START TEST nvmf_host_discovery 00:27:15.593 ************************************ 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:15.593 * Looking for test storage... 
00:27:15.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.593 21:20:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.593 --rc genhtml_branch_coverage=1 00:27:15.593 --rc genhtml_function_coverage=1 00:27:15.593 --rc 
genhtml_legend=1 00:27:15.593 --rc geninfo_all_blocks=1 00:27:15.593 --rc geninfo_unexecuted_blocks=1 00:27:15.593 00:27:15.593 ' 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.593 --rc genhtml_branch_coverage=1 00:27:15.593 --rc genhtml_function_coverage=1 00:27:15.593 --rc genhtml_legend=1 00:27:15.593 --rc geninfo_all_blocks=1 00:27:15.593 --rc geninfo_unexecuted_blocks=1 00:27:15.593 00:27:15.593 ' 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.593 --rc genhtml_branch_coverage=1 00:27:15.593 --rc genhtml_function_coverage=1 00:27:15.593 --rc genhtml_legend=1 00:27:15.593 --rc geninfo_all_blocks=1 00:27:15.593 --rc geninfo_unexecuted_blocks=1 00:27:15.593 00:27:15.593 ' 00:27:15.593 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.593 --rc genhtml_branch_coverage=1 00:27:15.593 --rc genhtml_function_coverage=1 00:27:15.593 --rc genhtml_legend=1 00:27:15.593 --rc geninfo_all_blocks=1 00:27:15.593 --rc geninfo_unexecuted_blocks=1 00:27:15.593 00:27:15.594 ' 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.594 21:20:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.594 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.856 21:20:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.856 21:20:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:15.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.856 21:20:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.999 
21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.999 21:20:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:23.999 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:23.999 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:23.999 Found net devices under 0000:31:00.0: cvl_0_0 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:23.999 Found net devices under 0000:31:00.1: cvl_0_1 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.999 21:20:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:27:23.999 00:27:23.999 --- 10.0.0.2 ping statistics --- 00:27:23.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.999 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:23.999 00:27:23.999 --- 10.0.0.1 ping statistics --- 00:27:23.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.999 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.999 
21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:23.999 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2230667 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2230667 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2230667 ']' 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.000 21:20:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.000 [2024-12-05 21:20:25.421807] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:27:24.000 [2024-12-05 21:20:25.421888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.260 [2024-12-05 21:20:25.530996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.260 [2024-12-05 21:20:25.581021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.260 [2024-12-05 21:20:25.581074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.260 [2024-12-05 21:20:25.581083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.260 [2024-12-05 21:20:25.581090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.260 [2024-12-05 21:20:25.581096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
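The `waitforlisten` call traced above (`local rpc_addr=/var/tmp/spdk.sock`, `local max_retries=100`) polls until the target's RPC socket is up. Its body is not shown in the trace, so the following is only a hedged re-creation of that polling pattern; `wait_for_sock` and its parameters are the editor's names, not SPDK's:

```shell
# Minimal sketch of a waitforlisten-style loop: retry until a UNIX-domain
# socket exists at the given path, up to max_retries attempts.
wait_for_sock() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i++ < max_retries )); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                        # gave up; caller should fail the test
}
```

SPDK's real helper additionally checks that the target PID is still alive between retries, which this sketch omits.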
00:27:24.260 [2024-12-05 21:20:25.581901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.830 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.830 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:24.830 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.830 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.830 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 [2024-12-05 21:20:26.280776] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 [2024-12-05 21:20:26.293032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:25.090 21:20:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 null0 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 null1 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.090 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2230832 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2230832 /tmp/host.sock 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2230832 ']' 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:25.091 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.091 21:20:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.091 [2024-12-05 21:20:26.392263] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:27:25.091 [2024-12-05 21:20:26.392329] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230832 ] 00:27:25.091 [2024-12-05 21:20:26.475829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.091 [2024-12-05 21:20:26.517797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:26.030 
21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:26.030 21:20:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.030 21:20:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:26.030 21:20:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.030 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.291 [2024-12-05 21:20:27.548139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.291 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
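The `waitforcondition` helper whose xtrace appears above (autotest_common.sh@918-922: `local cond=...`, `local max=10`, `(( max-- ))`, `eval`) re-evaluates a condition string until it holds or the retry budget runs out. Reconstructed from those visible trace lines (retry delay is assumed; the trace does not show it):

```shell
# Re-creation of the waitforcondition loop traced above: eval the condition
# string up to 10 times, returning success as soon as it holds.
waitforcondition() {
  local cond=$1 max=10
  while (( max-- )); do
    eval "$cond" && return 0   # condition met, e.g. notification_count == expected_count
    sleep 1                    # assumed back-off between retries
  done
  return 1
}

waitforcondition '(( 1 == 1 ))'   # succeeds immediately
```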
00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.292 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.555 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.555 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:26.555 21:20:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:26.815 [2024-12-05 21:20:28.228147] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:26.815 [2024-12-05 21:20:28.228167] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:26.815 [2024-12-05 21:20:28.228180] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:27.075 [2024-12-05 21:20:28.354565] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:27.075 [2024-12-05 21:20:28.409462] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:27.075 [2024-12-05 21:20:28.410567] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x2184190:1 started. 00:27:27.075 [2024-12-05 21:20:28.412195] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:27.075 [2024-12-05 21:20:28.412213] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:27.075 [2024-12-05 21:20:28.418220] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2184190 was disconnected and freed. delete nvme_qpair. 00:27:27.335 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.335 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.597 21:20:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:27.597 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:27.598 
21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.598 21:20:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:27.859 [2024-12-05 21:20:29.208772] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21845d0:1 started. 00:27:27.859 [2024-12-05 21:20:29.220200] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21845d0 was disconnected and freed. delete nvme_qpair. 
00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:27.859 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.120 [2024-12-05 21:20:29.300882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:28.120 [2024-12-05 21:20:29.301430] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:28.120 [2024-12-05 21:20:29.301450] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:28.120 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:28.121 21:20:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:28.121 21:20:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:28.121 [2024-12-05 21:20:29.427224] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:28.121 21:20:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:28.121 [2024-12-05 21:20:29.485930] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:28.121 [2024-12-05 21:20:29.485966] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:28.121 [2024-12-05 21:20:29.485975] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:27:28.121 [2024-12-05 21:20:29.485984] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.063 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.331 [2024-12-05 21:20:30.576936] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:29.331 [2024-12-05 21:20:30.576958] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:29.331 [2024-12-05 21:20:30.584113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.331 [2024-12-05 21:20:30.584133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.331 [2024-12-05 21:20:30.584143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.331 [2024-12-05 21:20:30.584151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.331 [2024-12-05 21:20:30.584159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.331 [2024-12-05 21:20:30.584166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.331 [2024-12-05 21:20:30.584175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.331 [2024-12-05 21:20:30.584182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.331 [2024-12-05 21:20:30.584189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:29.331 21:20:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:29.331 [2024-12-05 21:20:30.594127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.331 [2024-12-05 21:20:30.604163] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.331 [2024-12-05 21:20:30.604175] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:29.331 [2024-12-05 21:20:30.604183] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.331 [2024-12-05 21:20:30.604192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.331 [2024-12-05 21:20:30.604210] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:29.331 [2024-12-05 21:20:30.604432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-05 21:20:30.604452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.331 [2024-12-05 21:20:30.604467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.331 [2024-12-05 21:20:30.604480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.331 [2024-12-05 21:20:30.604491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.331 [2024-12-05 21:20:30.604497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.331 [2024-12-05 21:20:30.604505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.331 [2024-12-05 21:20:30.604512] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:29.331 [2024-12-05 21:20:30.604518] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.331 [2024-12-05 21:20:30.604523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.331 [2024-12-05 21:20:30.614241] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.331 [2024-12-05 21:20:30.614252] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:29.331 [2024-12-05 21:20:30.614257] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.331 [2024-12-05 21:20:30.614262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.331 [2024-12-05 21:20:30.614276] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:29.331 [2024-12-05 21:20:30.614586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-05 21:20:30.614598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.331 [2024-12-05 21:20:30.614605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.331 [2024-12-05 21:20:30.614617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.331 [2024-12-05 21:20:30.614627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.331 [2024-12-05 21:20:30.614633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.331 [2024-12-05 21:20:30.614640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.331 [2024-12-05 21:20:30.614647] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:29.331 [2024-12-05 21:20:30.614651] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.331 [2024-12-05 21:20:30.614656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:29.331 [2024-12-05 21:20:30.624307] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.331 [2024-12-05 21:20:30.624322] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:29.331 [2024-12-05 21:20:30.624327] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.331 [2024-12-05 21:20:30.624331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.331 [2024-12-05 21:20:30.624347] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:29.331 [2024-12-05 21:20:30.624658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.331 [2024-12-05 21:20:30.624674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.331 [2024-12-05 21:20:30.624681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.331 [2024-12-05 21:20:30.624693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.331 [2024-12-05 21:20:30.624704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.331 [2024-12-05 21:20:30.624711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.331 [2024-12-05 21:20:30.624718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.331 [2024-12-05 21:20:30.624724] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:29.331 [2024-12-05 21:20:30.624729] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.331 [2024-12-05 21:20:30.624734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:29.331 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:29.332 [2024-12-05 21:20:30.634378] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.332 [2024-12-05 21:20:30.634391] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:29.332 [2024-12-05 21:20:30.634396] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:29.332 [2024-12-05 21:20:30.634400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.332 [2024-12-05 21:20:30.634414] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:29.332 [2024-12-05 21:20:30.634702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-05 21:20:30.634713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.332 [2024-12-05 21:20:30.634720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.332 [2024-12-05 21:20:30.634731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.332 [2024-12-05 21:20:30.634741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.332 [2024-12-05 21:20:30.634747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.332 [2024-12-05 21:20:30.634755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.332 [2024-12-05 21:20:30.634760] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:29.332 [2024-12-05 21:20:30.634765] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.332 [2024-12-05 21:20:30.634773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.332 [2024-12-05 21:20:30.644445] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.332 [2024-12-05 21:20:30.644458] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:29.332 [2024-12-05 21:20:30.644463] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.332 [2024-12-05 21:20:30.644468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.332 [2024-12-05 21:20:30.644483] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:29.332 [2024-12-05 21:20:30.644803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-05 21:20:30.644816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.332 [2024-12-05 21:20:30.644823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.332 [2024-12-05 21:20:30.644834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.332 [2024-12-05 21:20:30.644844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.332 [2024-12-05 21:20:30.644851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.332 [2024-12-05 21:20:30.644858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.332 [2024-12-05 21:20:30.644870] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:29.332 [2024-12-05 21:20:30.644875] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.332 [2024-12-05 21:20:30.644879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:29.332 [2024-12-05 21:20:30.654512] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.332 [2024-12-05 21:20:30.654524] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:29.332 [2024-12-05 21:20:30.654529] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.332 [2024-12-05 21:20:30.654533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.332 [2024-12-05 21:20:30.654547] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:29.332 [2024-12-05 21:20:30.654834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-05 21:20:30.654845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.332 [2024-12-05 21:20:30.654853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.332 [2024-12-05 21:20:30.654871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.332 [2024-12-05 21:20:30.654882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.332 [2024-12-05 21:20:30.654889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.332 [2024-12-05 21:20:30.654896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.332 [2024-12-05 21:20:30.654902] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:29.332 [2024-12-05 21:20:30.654907] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.332 [2024-12-05 21:20:30.654911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:29.332 [2024-12-05 21:20:30.664579] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:29.332 [2024-12-05 21:20:30.664590] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:29.332 [2024-12-05 21:20:30.664594] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:29.332 [2024-12-05 21:20:30.664599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:29.332 [2024-12-05 21:20:30.664612] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:29.332 [2024-12-05 21:20:30.664906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.332 [2024-12-05 21:20:30.664918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21547d0 with addr=10.0.0.2, port=4420 00:27:29.332 [2024-12-05 21:20:30.664925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21547d0 is same with the state(6) to be set 00:27:29.332 [2024-12-05 21:20:30.664935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21547d0 (9): Bad file descriptor 00:27:29.332 [2024-12-05 21:20:30.664945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:29.332 [2024-12-05 21:20:30.664952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:29.332 [2024-12-05 21:20:30.664959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:29.332 [2024-12-05 21:20:30.664965] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:29.332 [2024-12-05 21:20:30.664970] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:29.332 [2024-12-05 21:20:30.664974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:29.332 [2024-12-05 21:20:30.666006] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:29.332 [2024-12-05 21:20:30.666023] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.332 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.593 21:20:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.978 [2024-12-05 21:20:32.009795] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.978 [2024-12-05 21:20:32.009813] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:27:30.978 [2024-12-05 21:20:32.009825] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.978 [2024-12-05 21:20:32.098114] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:30.978 [2024-12-05 21:20:32.362492] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:30.978 [2024-12-05 21:20:32.363255] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x218f750:1 started. 00:27:30.978 [2024-12-05 21:20:32.365075] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:30.978 [2024-12-05 21:20:32.365103] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # 
type -t rpc_cmd 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.978 request: 00:27:30.978 { 00:27:30.978 "name": "nvme", 00:27:30.978 "trtype": "tcp", 00:27:30.978 "traddr": "10.0.0.2", 00:27:30.978 "adrfam": "ipv4", 00:27:30.978 "trsvcid": "8009", 00:27:30.978 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:30.978 "wait_for_attach": true, 00:27:30.978 "method": "bdev_nvme_start_discovery", 00:27:30.978 "req_id": 1 00:27:30.978 } 00:27:30.978 Got JSON-RPC error response 00:27:30.978 response: 00:27:30.978 { 00:27:30.978 "code": -17, 00:27:30.978 "message": "File exists" 00:27:30.978 } 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:30.978 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.978 [2024-12-05 21:20:32.408647] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x218f750 was disconnected and freed. delete nvme_qpair. 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.239 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.240 request: 00:27:31.240 { 00:27:31.240 "name": "nvme_second", 00:27:31.240 "trtype": "tcp", 00:27:31.240 "traddr": "10.0.0.2", 00:27:31.240 "adrfam": "ipv4", 00:27:31.240 "trsvcid": "8009", 00:27:31.240 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:31.240 "wait_for_attach": true, 00:27:31.240 "method": "bdev_nvme_start_discovery", 00:27:31.240 "req_id": 1 00:27:31.240 } 00:27:31.240 Got JSON-RPC error response 00:27:31.240 response: 00:27:31.240 { 00:27:31.240 "code": -17, 00:27:31.240 "message": "File exists" 00:27:31.240 } 00:27:31.240 21:20:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:31.240 21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.240 
21:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.630 [2024-12-05 21:20:33.621168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.630 [2024-12-05 21:20:33.621196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21902c0 with addr=10.0.0.2, port=8010 00:27:32.630 [2024-12-05 21:20:33.621213] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:32.630 [2024-12-05 21:20:33.621221] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:32.630 [2024-12-05 21:20:33.621227] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:33.203 [2024-12-05 21:20:34.623478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.203 [2024-12-05 21:20:34.623501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21902c0 with addr=10.0.0.2, port=8010 00:27:33.203 [2024-12-05 21:20:34.623512] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:33.203 [2024-12-05 21:20:34.623519] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:33.203 [2024-12-05 21:20:34.623526] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:34.590 [2024-12-05 21:20:35.625485] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:34.590 request: 00:27:34.590 { 00:27:34.590 "name": "nvme_second", 00:27:34.590 "trtype": "tcp", 00:27:34.590 "traddr": "10.0.0.2", 00:27:34.590 "adrfam": "ipv4", 00:27:34.590 "trsvcid": "8010", 00:27:34.590 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:34.590 "wait_for_attach": false, 00:27:34.590 "attach_timeout_ms": 3000, 00:27:34.590 "method": "bdev_nvme_start_discovery", 00:27:34.590 "req_id": 1 00:27:34.590 } 00:27:34.590 Got 
JSON-RPC error response 00:27:34.590 response: 00:27:34.590 { 00:27:34.590 "code": -110, 00:27:34.590 "message": "Connection timed out" 00:27:34.590 } 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.590 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2230832 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@162 -- # nvmftestfini 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.591 rmmod nvme_tcp 00:27:34.591 rmmod nvme_fabrics 00:27:34.591 rmmod nvme_keyring 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2230667 ']' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2230667 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2230667 ']' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2230667 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230667 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:34.591 
21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230667' 00:27:34.591 killing process with pid 2230667 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2230667 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2230667 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.591 21:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.141 
00:27:37.141 real 0m21.190s 00:27:37.141 user 0m23.914s 00:27:37.141 sys 0m7.761s 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:37.141 ************************************ 00:27:37.141 END TEST nvmf_host_discovery 00:27:37.141 ************************************ 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.141 ************************************ 00:27:37.141 START TEST nvmf_host_multipath_status 00:27:37.141 ************************************ 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:37.141 * Looking for test storage... 
00:27:37.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:37.141 21:20:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.141 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.142 21:20:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:37.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.142 --rc genhtml_branch_coverage=1 00:27:37.142 --rc genhtml_function_coverage=1 00:27:37.142 --rc genhtml_legend=1 00:27:37.142 --rc geninfo_all_blocks=1 00:27:37.142 --rc geninfo_unexecuted_blocks=1 00:27:37.142 00:27:37.142 ' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:37.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.142 --rc genhtml_branch_coverage=1 00:27:37.142 --rc genhtml_function_coverage=1 00:27:37.142 --rc genhtml_legend=1 00:27:37.142 --rc geninfo_all_blocks=1 00:27:37.142 --rc geninfo_unexecuted_blocks=1 00:27:37.142 00:27:37.142 ' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:37.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.142 --rc genhtml_branch_coverage=1 00:27:37.142 --rc genhtml_function_coverage=1 00:27:37.142 --rc genhtml_legend=1 00:27:37.142 --rc geninfo_all_blocks=1 00:27:37.142 --rc geninfo_unexecuted_blocks=1 00:27:37.142 00:27:37.142 ' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:37.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.142 --rc genhtml_branch_coverage=1 00:27:37.142 --rc genhtml_function_coverage=1 00:27:37.142 --rc genhtml_legend=1 00:27:37.142 --rc geninfo_all_blocks=1 00:27:37.142 --rc geninfo_unexecuted_blocks=1 00:27:37.142 00:27:37.142 ' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:37.142 
21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.142 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.143 21:20:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.143 21:20:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:45.292 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:45.292 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:45.292 Found net devices under 0000:31:00.0: cvl_0_0 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.292 21:20:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:45.292 Found net devices under 0000:31:00.1: cvl_0_1 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.292 21:20:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.292 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:27:45.552 00:27:45.552 --- 10.0.0.2 ping statistics --- 00:27:45.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.552 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:27:45.552 00:27:45.552 --- 10.0.0.1 ping statistics --- 00:27:45.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.552 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2237561 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2237561 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2237561 ']' 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.552 21:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:45.552 [2024-12-05 21:20:46.926429] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:27:45.552 [2024-12-05 21:20:46.926497] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.814 [2024-12-05 21:20:47.016391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:45.814 [2024-12-05 21:20:47.057017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.814 [2024-12-05 21:20:47.057055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:45.814 [2024-12-05 21:20:47.057063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.814 [2024-12-05 21:20:47.057069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.814 [2024-12-05 21:20:47.057075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.814 [2024-12-05 21:20:47.058351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.814 [2024-12-05 21:20:47.058354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2237561 00:27:46.387 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:46.649 [2024-12-05 21:20:47.905721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.649 21:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:27:46.910 Malloc0 00:27:46.910 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:46.910 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.170 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.170 [2024-12-05 21:20:48.587230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.170 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:47.431 [2024-12-05 21:20:48.755607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2237928 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2237928 /var/tmp/bdevperf.sock 00:27:47.431 21:20:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2237928 ']' 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:47.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.431 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:47.691 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.691 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:47.691 21:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:47.951 21:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:48.211 Nvme0n1 00:27:48.211 21:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:48.783 Nvme0n1 00:27:48.783 21:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:48.783 21:20:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:50.695 21:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:50.695 21:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:50.955 21:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:50.955 21:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:51.896 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:51.896 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:51.896 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.155 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:52.155 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.155 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:52.155 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.155 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:52.414 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:52.414 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:52.414 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.414 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:52.673 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.673 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:52.673 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.673 21:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:52.673 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.673 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:52.673 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.673 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:52.933 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:52.933 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:52.933 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:52.933 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:53.194 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.194 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:53.194 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:53.194 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:53.454 21:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:54.393 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:54.393 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:54.393 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.393 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:54.653 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:54.653 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:54.653 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.653 21:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.972 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:55.256 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.256 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:55.256 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.256 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:55.257 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.257 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:55.257 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:55.257 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:55.517 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:55.517 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:55.517 21:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:55.778 21:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:56.040 21:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.986 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:57.248 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:57.248 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:57.248 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.248 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:57.510 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.510 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:57.510 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.510 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:57.771 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.771 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:57.771 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.771 21:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:57.771 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.771 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:57.771 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:57.771 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:58.033 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.033 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:58.033 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:58.295 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:58.295 21:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.683 21:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:59.683 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:59.683 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:59.683 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.683 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:59.944 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.944 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:59.944 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.944 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:00.206 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:00.468 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:00.468 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:00.468 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:00.730 21:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:00.730 21:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:01.674 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:01.674 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:01.674 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.674 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:01.935 21:21:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.935 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:01.935 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.935 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:02.197 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.197 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:02.197 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.197 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:02.197 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.459 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:02.459 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.459 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:02.459 
21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.459 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:02.459 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.459 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:02.720 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.720 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:02.720 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.720 21:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:02.981 21:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.981 21:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:02.981 21:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:02.981 21:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:03.242 21:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:04.193 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:04.193 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:04.193 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.193 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:04.453 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:04.453 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:04.453 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.453 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:04.713 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.713 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:04.713 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.713 21:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:04.713 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.713 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:04.713 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.713 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:04.974 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:04.974 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:04.974 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:04.974 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:05.235 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:05.235 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:05.235 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:05.235 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:05.235 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:05.235 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:05.496 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:05.496 21:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:05.758 21:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:05.758 21:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:07.143 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.404 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.404 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:07.404 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:28:07.404 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:07.664 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.664 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:07.664 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.664 21:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:07.664 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.664 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:07.664 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:07.664 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:07.924 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:07.924 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:07.924 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:08.185 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:08.185 21:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.569 21:21:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.569 21:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:09.830 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:09.830 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:09.830 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:09.830 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:10.090 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.090 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:10.090 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.090 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:10.351 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.351 
21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:10.351 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:10.351 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:10.351 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:10.351 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:10.351 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:10.610 21:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:10.869 21:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:11.809 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:11.809 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:11.809 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:11.809 21:21:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:12.070 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.070 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:12.070 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:12.070 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.070 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.070 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:12.330 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.330 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:12.330 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.330 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:12.330 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.330 21:21:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:12.591 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.591 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:12.591 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.591 21:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:12.852 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.852 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:12.852 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:12.852 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:12.852 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:12.852 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:12.853 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:13.114 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:13.374 21:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:14.317 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:14.317 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:14.317 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.317 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:14.576 21:21:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.576 21:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:14.836 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:14.836 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:14.836 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:14.836 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:15.095 
21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:15.095 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2237928 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2237928 ']' 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2237928 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237928 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2237928' 00:28:15.355 killing process with pid 2237928 00:28:15.355 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2237928 00:28:15.355 
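The check_status sequence above calls `bdev_nvme_get_io_paths` over the bdevperf RPC socket and uses jq to pull one flag (`current`, `connected`, `accessible`) out of the path whose `trsvcid` matches the port under test. A minimal Python sketch of that same filtering logic follows; the JSON shape and the `port_status` helper name are assumptions inferred from the jq expression `.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current` in the log, not SPDK's actual schema documentation.

```python
# Sketch of the port_status check from multipath_status.sh.
# The reply shape below is an ASSUMPTION reconstructed from the jq
# filter shown in the log output, not taken from SPDK docs.
io_paths_reply = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": False},
        ]}
    ]
}

def port_status(reply, port, attr):
    """Return the named flag for the io_path listening on `port`,
    mirroring: .poll_groups[].io_paths[] | select(.transport.trsvcid==port).attr"""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[attr]
    raise KeyError(port)

# Expected states after listener 4421 was set to "inaccessible":
assert port_status(io_paths_reply, "4420", "current") is True
assert port_status(io_paths_reply, "4420", "connected") is True
assert port_status(io_paths_reply, "4421", "connected") is True
assert port_status(io_paths_reply, "4421", "accessible") is False
```

The shell test then compares the extracted value against the expected literal with `[[ true == \t\r\u\e ]]`, which is what the bracketed comparisons in the log correspond to.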
21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2237928 00:28:15.619 { 00:28:15.619 "results": [ 00:28:15.619 { 00:28:15.619 "job": "Nvme0n1", 00:28:15.619 "core_mask": "0x4", 00:28:15.619 "workload": "verify", 00:28:15.619 "status": "terminated", 00:28:15.619 "verify_range": { 00:28:15.619 "start": 0, 00:28:15.619 "length": 16384 00:28:15.619 }, 00:28:15.619 "queue_depth": 128, 00:28:15.619 "io_size": 4096, 00:28:15.619 "runtime": 26.719817, 00:28:15.619 "iops": 10818.075587867987, 00:28:15.619 "mibps": 42.25810776510932, 00:28:15.619 "io_failed": 0, 00:28:15.619 "io_timeout": 0, 00:28:15.619 "avg_latency_us": 11795.054126487163, 00:28:15.619 "min_latency_us": 192.0, 00:28:15.619 "max_latency_us": 3019898.88 00:28:15.619 } 00:28:15.619 ], 00:28:15.619 "core_count": 1 00:28:15.619 } 00:28:15.619 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2237928 00:28:15.619 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:15.619 [2024-12-05 21:20:48.805072] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:28:15.619 [2024-12-05 21:20:48.805134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237928 ] 00:28:15.619 [2024-12-05 21:20:48.869858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.619 [2024-12-05 21:20:48.898718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.619 Running I/O for 90 seconds... 
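The terminated bdevperf job's figures in the JSON above are internally consistent: the reported MiB/s equals IOPS times the per-I/O size (4096 bytes) divided by 2^20. A quick arithmetic check, with the values copied from the `results` block:

```python
# Cross-check the bdevperf summary: mibps == iops * io_size / 2**20.
iops = 10818.075587867987      # "iops" from the results JSON above
io_size = 4096                 # "io_size" in bytes
mibps = iops * io_size / (1024 * 1024)

# Agrees with the reported "mibps": 42.25810776510932
assert abs(mibps - 42.25810776510932) < 1e-9
```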
00:28:15.619 9636.00 IOPS, 37.64 MiB/s [2024-12-05T20:21:17.056Z] 9669.00 IOPS, 37.77 MiB/s [2024-12-05T20:21:17.056Z] 9668.67 IOPS, 37.77 MiB/s [2024-12-05T20:21:17.056Z] 9666.25 IOPS, 37.76 MiB/s [2024-12-05T20:21:17.056Z] 9930.20 IOPS, 38.79 MiB/s [2024-12-05T20:21:17.056Z] 10450.17 IOPS, 40.82 MiB/s [2024-12-05T20:21:17.056Z] 10823.00 IOPS, 42.28 MiB/s [2024-12-05T20:21:17.056Z] 10766.12 IOPS, 42.06 MiB/s [2024-12-05T20:21:17.056Z] 10652.89 IOPS, 41.61 MiB/s [2024-12-05T20:21:17.056Z] 10560.50 IOPS, 41.25 MiB/s [2024-12-05T20:21:17.056Z] 10479.18 IOPS, 40.93 MiB/s [2024-12-05T20:21:17.056Z] [2024-12-05 21:21:01.909796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 
cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.909969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.909975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.910473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.910484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.910497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.910508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:15.619 [2024-12-05 21:21:01.910519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.619 [2024-12-05 21:21:01.910525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:28:15.620 [2024-12-05 21:21:01.910602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 
[2024-12-05 21:21:01.910687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.620 [2024-12-05 21:21:01.910736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.620 [2024-12-05 21:21:01.910753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 
21:21:01.910780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910871] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.910987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.910998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:15.620 [2024-12-05 21:21:01.911161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.620 [2024-12-05 21:21:01.911166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.911289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.911294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:15.621 [2024-12-05 21:21:01.912616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.621 [2024-12-05 21:21:01.912621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.622 [2024-12-05 21:21:01.912779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.912992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.912998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:01.913384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:01.913389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:15.622 10300.33 IOPS, 40.24 MiB/s [2024-12-05T20:21:17.059Z] 9508.00 IOPS, 37.14 MiB/s [2024-12-05T20:21:17.059Z] 8828.86 IOPS, 34.49 MiB/s [2024-12-05T20:21:17.059Z] 8365.60 IOPS, 32.68 MiB/s [2024-12-05T20:21:17.059Z] 8648.12 IOPS, 33.78 MiB/s [2024-12-05T20:21:17.059Z] 8906.18 IOPS, 34.79 MiB/s [2024-12-05T20:21:17.059Z] 9355.72 IOPS, 36.55 MiB/s [2024-12-05T20:21:17.059Z] 9755.37 IOPS, 38.11 MiB/s 
[2024-12-05T20:21:17.059Z] 9992.35 IOPS, 39.03 MiB/s [2024-12-05T20:21:17.059Z] 10139.24 IOPS, 39.61 MiB/s [2024-12-05T20:21:17.059Z] 10275.09 IOPS, 40.14 MiB/s [2024-12-05T20:21:17.059Z] 10553.91 IOPS, 41.23 MiB/s [2024-12-05T20:21:17.059Z] 10820.12 IOPS, 42.27 MiB/s [2024-12-05T20:21:17.059Z] [2024-12-05 21:21:14.537357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.622 [2024-12-05 21:21:14.537394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:15.622 [2024-12-05 21:21:14.537425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.623 [2024-12-05 21:21:14.537431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.623 [2024-12-05 21:21:14.537557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.623 [2024-12-05 21:21:14.537574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.623 [2024-12-05 21:21:14.537589] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.623 [2024-12-05 21:21:14.537605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.623 [2024-12-05 21:21:14.537621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:15.623 [2024-12-05 21:21:14.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.623 [2024-12-05 21:21:14.537795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:15.623 [2024-12-05 21:21:14.537806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.623 [2024-12-05 21:21:14.537813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:15.623 10908.68 IOPS, 42.61 MiB/s [2024-12-05T20:21:17.060Z] 10864.96 IOPS, 42.44 MiB/s 
[2024-12-05T20:21:17.060Z] Received shutdown signal, test time was about 26.720428 seconds 00:28:15.623 00:28:15.623 Latency(us) 00:28:15.623 [2024-12-05T20:21:17.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.623 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:15.623 Verification LBA range: start 0x0 length 0x4000 00:28:15.623 Nvme0n1 : 26.72 10818.08 42.26 0.00 0.00 11795.05 192.00 3019898.88 00:28:15.623 [2024-12-05T20:21:17.060Z] =================================================================================================================== 00:28:15.623 [2024-12-05T20:21:17.060Z] Total : 10818.08 42.26 0.00 0.00 11795.05 192.00 3019898.88 00:28:15.623 21:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.884 rmmod nvme_tcp 00:28:15.884 rmmod nvme_fabrics 00:28:15.884 rmmod nvme_keyring 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2237561 ']' 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2237561 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2237561 ']' 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2237561 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237561 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2237561' 00:28:15.884 killing process with pid 2237561 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2237561 00:28:15.884 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- 
# wait 2237561 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.145 21:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.058 00:28:18.058 real 0m41.351s 00:28:18.058 user 1m43.788s 00:28:18.058 sys 0m12.477s 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:18.058 ************************************ 00:28:18.058 END TEST nvmf_host_multipath_status 
00:28:18.058 ************************************ 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.058 21:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.319 ************************************ 00:28:18.319 START TEST nvmf_discovery_remove_ifc 00:28:18.319 ************************************ 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:18.319 * Looking for test storage... 00:28:18.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.319 21:21:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.319 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:18.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.320 --rc genhtml_branch_coverage=1 00:28:18.320 --rc genhtml_function_coverage=1 00:28:18.320 --rc genhtml_legend=1 00:28:18.320 --rc geninfo_all_blocks=1 
00:28:18.320 --rc geninfo_unexecuted_blocks=1 00:28:18.320 00:28:18.320 ' 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:18.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.320 --rc genhtml_branch_coverage=1 00:28:18.320 --rc genhtml_function_coverage=1 00:28:18.320 --rc genhtml_legend=1 00:28:18.320 --rc geninfo_all_blocks=1 00:28:18.320 --rc geninfo_unexecuted_blocks=1 00:28:18.320 00:28:18.320 ' 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:18.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.320 --rc genhtml_branch_coverage=1 00:28:18.320 --rc genhtml_function_coverage=1 00:28:18.320 --rc genhtml_legend=1 00:28:18.320 --rc geninfo_all_blocks=1 00:28:18.320 --rc geninfo_unexecuted_blocks=1 00:28:18.320 00:28:18.320 ' 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:18.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.320 --rc genhtml_branch_coverage=1 00:28:18.320 --rc genhtml_function_coverage=1 00:28:18.320 --rc genhtml_legend=1 00:28:18.320 --rc geninfo_all_blocks=1 00:28:18.320 --rc geninfo_unexecuted_blocks=1 00:28:18.320 00:28:18.320 ' 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.320 
21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.320 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.581 
21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.581 21:21:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.581 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.582 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.582 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.582 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.582 21:21:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.717 21:21:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.717 21:21:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:26.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.717 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.718 21:21:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:26.718 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:26.718 Found net devices under 0000:31:00.0: cvl_0_0 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:26.718 Found net devices under 0000:31:00.1: cvl_0_1 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.718 21:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.718 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:28:26.979 00:28:26.979 --- 10.0.0.2 ping statistics --- 00:28:26.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.979 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:28:26.979 00:28:26.979 --- 10.0.0.1 ping statistics --- 00:28:26.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.979 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2248884 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2248884 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2248884 ']' 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.979 21:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:26.979 [2024-12-05 21:21:28.277179] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:28:26.979 [2024-12-05 21:21:28.277241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.979 [2024-12-05 21:21:28.380149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.240 [2024-12-05 21:21:28.426778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.240 [2024-12-05 21:21:28.426833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:27.240 [2024-12-05 21:21:28.426842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.240 [2024-12-05 21:21:28.426855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.240 [2024-12-05 21:21:28.426875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.240 [2024-12-05 21:21:28.427663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.812 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:27.813 [2024-12-05 21:21:29.170454] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.813 [2024-12-05 21:21:29.178695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:27.813 null0 00:28:27.813 [2024-12-05 21:21:29.210663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2249074 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2249074 /tmp/host.sock 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2249074 ']' 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:27.813 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.813 21:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:28.074 [2024-12-05 21:21:29.298021] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:28:28.074 [2024-12-05 21:21:29.298087] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249074 ] 00:28:28.074 [2024-12-05 21:21:29.380400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.074 [2024-12-05 21:21:29.421757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.017 21:21:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.017 21:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:29.966 [2024-12-05 21:21:31.239957] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:29.966 [2024-12-05 21:21:31.239979] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:29.966 [2024-12-05 21:21:31.239993] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:29.966 [2024-12-05 21:21:31.366377] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:30.227 [2024-12-05 21:21:31.549569] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:30.227 [2024-12-05 21:21:31.550623] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1be8110:1 started. 
00:28:30.227 [2024-12-05 21:21:31.552245] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:30.227 [2024-12-05 21:21:31.552292] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:30.227 [2024-12-05 21:21:31.552314] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:30.227 [2024-12-05 21:21:31.552328] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:30.227 [2024-12-05 21:21:31.552349] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:30.227 [2024-12-05 21:21:31.558526] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1be8110 was disconnected and freed. delete nvme_qpair. 
00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:30.227 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:30.487 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.488 21:21:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:30.488 21:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:31.431 21:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:32.816 21:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:33.778 21:21:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:33.778 21:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:34.726 21:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.726 21:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:34.726 21:21:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:35.669 [2024-12-05 21:21:36.992818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:35.669 [2024-12-05 21:21:36.992858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.669 [2024-12-05 21:21:36.992875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.669 [2024-12-05 21:21:36.992885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.669 [2024-12-05 21:21:36.992893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.669 [2024-12-05 21:21:36.992904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.669 [2024-12-05 21:21:36.992912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.669 [2024-12-05 21:21:36.992920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.669 [2024-12-05 21:21:36.992928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.669 [2024-12-05 21:21:36.992936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.669 [2024-12-05 21:21:36.992944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.669 [2024-12-05 21:21:36.992952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4b20 is same with the state(6) to be set 00:28:35.669 [2024-12-05 21:21:37.002839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc4b20 (9): Bad file descriptor 00:28:35.669 [2024-12-05 21:21:37.012879] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:28:35.669 [2024-12-05 21:21:37.012892] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:35.669 [2024-12-05 21:21:37.012900] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:35.669 [2024-12-05 21:21:37.012905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:35.669 [2024-12-05 21:21:37.012926] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:35.669 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.669 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.669 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.670 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.670 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.670 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.670 21:21:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:37.057 [2024-12-05 21:21:38.075891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:37.057 [2024-12-05 21:21:38.075934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc4b20 with addr=10.0.0.2, port=4420 00:28:37.057 [2024-12-05 21:21:38.075948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4b20 is same with the state(6) to be set 00:28:37.057 [2024-12-05 21:21:38.075975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1bc4b20 (9): Bad file descriptor 00:28:37.057 [2024-12-05 21:21:38.076353] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:37.057 [2024-12-05 21:21:38.076378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:37.057 [2024-12-05 21:21:38.076387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:37.057 [2024-12-05 21:21:38.076396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:37.057 [2024-12-05 21:21:38.076404] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:37.057 [2024-12-05 21:21:38.076411] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:37.057 [2024-12-05 21:21:38.076416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:37.057 [2024-12-05 21:21:38.076424] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:37.058 [2024-12-05 21:21:38.076429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:37.058 21:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.058 21:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.058 21:21:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.001 [2024-12-05 21:21:39.078800] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:38.001 [2024-12-05 21:21:39.078819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:38.001 [2024-12-05 21:21:39.078832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:38.001 [2024-12-05 21:21:39.078844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:38.001 [2024-12-05 21:21:39.078851] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:38.001 [2024-12-05 21:21:39.078858] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:38.001 [2024-12-05 21:21:39.078867] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:38.001 [2024-12-05 21:21:39.078872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:38.001 [2024-12-05 21:21:39.078894] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:38.001 [2024-12-05 21:21:39.078916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.001 [2024-12-05 21:21:39.078926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.001 [2024-12-05 21:21:39.078937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.001 [2024-12-05 21:21:39.078945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.001 [2024-12-05 21:21:39.078953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.001 [2024-12-05 21:21:39.078961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.001 [2024-12-05 21:21:39.078969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.001 [2024-12-05 21:21:39.078977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.001 [2024-12-05 21:21:39.078985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:38.001 [2024-12-05 21:21:39.078992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:38.001 [2024-12-05 21:21:39.079001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:28:38.001 [2024-12-05 21:21:39.079208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb3e60 (9): Bad file descriptor 00:28:38.001 [2024-12-05 21:21:39.080220] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:38.001 [2024-12-05 21:21:39.080231] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.001 21:21:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:38.001 21:21:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.946 21:21:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:38.946 21:21:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.888 [2024-12-05 21:21:41.136039] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:39.889 [2024-12-05 21:21:41.136062] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:39.889 [2024-12-05 21:21:41.136076] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:39.889 [2024-12-05 21:21:41.262446] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:39.889 [2024-12-05 21:21:41.321945] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:39.889 [2024-12-05 21:21:41.322686] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bc7000:1 started. 
00:28:39.889 [2024-12-05 21:21:41.323963] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:39.889 [2024-12-05 21:21:41.323999] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:39.889 [2024-12-05 21:21:41.324020] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:39.889 [2024-12-05 21:21:41.324034] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:39.889 [2024-12-05 21:21:41.324042] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:40.150 [2024-12-05 21:21:41.332609] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bc7000 was disconnected and freed. delete nvme_qpair. 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:40.150 21:21:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2249074 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2249074 ']' 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2249074 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2249074 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2249074' 00:28:40.150 killing process with pid 2249074 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2249074 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2249074 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.150 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.411 
21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.411 rmmod nvme_tcp 00:28:40.411 rmmod nvme_fabrics 00:28:40.411 rmmod nvme_keyring 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2248884 ']' 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2248884 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2248884 ']' 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2248884 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2248884 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2248884' 00:28:40.411 
killing process with pid 2248884 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2248884 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2248884 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.411 21:21:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.961 00:28:42.961 real 0m24.366s 00:28:42.961 user 0m27.586s 00:28:42.961 sys 0m7.901s 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:42.961 ************************************ 00:28:42.961 END TEST nvmf_discovery_remove_ifc 00:28:42.961 ************************************ 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.961 ************************************ 00:28:42.961 START TEST nvmf_identify_kernel_target 00:28:42.961 ************************************ 00:28:42.961 21:21:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:42.961 * Looking for test storage... 
00:28:42.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:42.961 21:21:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.961 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.962 21:21:44 
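The `cmp_versions`/`lt 1.15 2` gate traced above splits version strings on `.` and `-` and compares fields numerically. A minimal pure-bash reconstruction (the helper name `lt` matches the call in the log, but this body is a sketch, not SPDK's actual `scripts/common.sh`):

```shell
# Sketch of the version comparison seen in the trace: split on '.' / '-',
# compare field by field, missing fields default to 0.
lt() {  # returns 0 (true) iff $1 < $2
  local IFS=.- i
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the check the log performs
```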
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:42.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.962 --rc genhtml_branch_coverage=1 00:28:42.962 --rc genhtml_function_coverage=1 00:28:42.962 --rc genhtml_legend=1 00:28:42.962 --rc geninfo_all_blocks=1 00:28:42.962 --rc geninfo_unexecuted_blocks=1 00:28:42.962 00:28:42.962 ' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:42.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.962 --rc genhtml_branch_coverage=1 00:28:42.962 --rc genhtml_function_coverage=1 00:28:42.962 --rc genhtml_legend=1 00:28:42.962 --rc geninfo_all_blocks=1 00:28:42.962 --rc geninfo_unexecuted_blocks=1 00:28:42.962 00:28:42.962 ' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:42.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.962 --rc genhtml_branch_coverage=1 00:28:42.962 --rc genhtml_function_coverage=1 00:28:42.962 --rc genhtml_legend=1 00:28:42.962 --rc geninfo_all_blocks=1 00:28:42.962 --rc geninfo_unexecuted_blocks=1 00:28:42.962 00:28:42.962 ' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:42.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.962 --rc genhtml_branch_coverage=1 00:28:42.962 --rc genhtml_function_coverage=1 00:28:42.962 --rc genhtml_legend=1 00:28:42.962 --rc geninfo_all_blocks=1 00:28:42.962 --rc geninfo_unexecuted_blocks=1 00:28:42.962 00:28:42.962 ' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:42.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
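The `[: : integer expression expected` diagnostic from `nvmf/common.sh: line 33` above is a benign but real shell bug: `'[' '' -eq 1 ']'` hands `-eq` an empty string where it needs an integer. A defensive pattern gives the variable a numeric default before testing (the variable name here is a stand-in, since the trace only shows the expanded empty value):

```shell
# Hypothetical variable name; the log only shows it expanding to ''.
SOME_FLAG=""

# Buggy form, as traced:   [ "$SOME_FLAG" -eq 1 ]   -> "integer expression expected"
# Defensive form: default empty/unset to 0 so -eq always sees an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
  echo "flag enabled"
else
  echo "flag disabled"   # this branch is taken for the empty value
fi
```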
00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.962 21:21:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:51.275 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.276 21:21:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:51.276 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.276 21:21:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:51.276 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.276 21:21:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:51.276 Found net devices under 0000:31:00.0: cvl_0_0 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:51.276 Found net devices under 0000:31:00.1: cvl_0_1 
00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
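The `nvmf_tcp_init` sequence traced above (common.sh lines 267–287) moves one port of the NIC into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) talk over the physical link. A dry-run sketch of that plumbing, with interface names and addresses taken from the log; swap `echo` for `sudo` to apply it for real (requires root and the actual NICs):

```shell
# Dry-run reconstruction of the netns plumbing in the trace above.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # replace 'echo' with 'sudo' to execute

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target side enters the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# The comment tag lets cleanup later strip only SPDK-added rules
# (the iptables-save | grep -v SPDK_NVMF | iptables-restore step in the log).
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
```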
00:28:51.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:28:51.276 00:28:51.276 --- 10.0.0.2 ping statistics --- 00:28:51.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.276 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:28:51.276 00:28:51.276 --- 10.0.0.1 ping statistics --- 00:28:51.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.276 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.276 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:51.277 
21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:51.277 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:51.537 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:51.537 21:21:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:55.746 Waiting for block devices as requested 00:28:55.746 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:55.746 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:56.008 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:56.008 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:56.269 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:56.269 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:56.269 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:56.269 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
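`configure_kernel_target` above builds a kernel NVMe-oF target through the nvmet configfs tree; the subsystem, namespace, and port paths are printed in the trace. The sketch below emits the usual configfs steps for that layout as a dry run (it needs root and the `nvmet`/`nvmet-tcp` modules to apply). The attribute names are the standard nvmet configfs ones, not copied from the script, so treat them as an assumption:

```shell
# Dry-run sketch of a kernel nvmet target matching the paths in the log.
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2016-06.io.spdk:testnqn
subsys=$nvmet/subsystems/$subnqn
port=$nvmet/ports/1

cat <<EOF
mkdir -p $subsys/namespaces/1
echo 1 > $subsys/attr_allow_any_host
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
echo 1 > $subsys/namespaces/1/enable
mkdir -p $port
echo tcp      > $port/addr_trtype
echo ipv4     > $port/addr_adrfam
echo 10.0.0.1 > $port/addr_traddr
echo 4420     > $port/addr_trsvcid
ln -s $subsys $port/subsystems/$subnqn
EOF
```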
00:28:56.531 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:56.531 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:56.793 No valid GPT data, bailing 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:56.793 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:56.794 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:57.057 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:57.057 00:28:57.057 Discovery Log Number of Records 2, Generation counter 2 00:28:57.057 =====Discovery Log Entry 0====== 00:28:57.057 trtype: tcp 00:28:57.057 adrfam: ipv4 00:28:57.057 subtype: current discovery subsystem 
00:28:57.057 treq: not specified, sq flow control disable supported 00:28:57.057 portid: 1 00:28:57.057 trsvcid: 4420 00:28:57.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:57.057 traddr: 10.0.0.1 00:28:57.057 eflags: none 00:28:57.057 sectype: none 00:28:57.057 =====Discovery Log Entry 1====== 00:28:57.057 trtype: tcp 00:28:57.057 adrfam: ipv4 00:28:57.057 subtype: nvme subsystem 00:28:57.057 treq: not specified, sq flow control disable supported 00:28:57.057 portid: 1 00:28:57.057 trsvcid: 4420 00:28:57.057 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:57.057 traddr: 10.0.0.1 00:28:57.057 eflags: none 00:28:57.057 sectype: none 00:28:57.057 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:57.057 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:57.057 ===================================================== 00:28:57.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:57.057 ===================================================== 00:28:57.057 Controller Capabilities/Features 00:28:57.057 ================================ 00:28:57.057 Vendor ID: 0000 00:28:57.057 Subsystem Vendor ID: 0000 00:28:57.057 Serial Number: 25c1fcbb848e3000c5c5 00:28:57.057 Model Number: Linux 00:28:57.057 Firmware Version: 6.8.9-20 00:28:57.057 Recommended Arb Burst: 0 00:28:57.057 IEEE OUI Identifier: 00 00 00 00:28:57.057 Multi-path I/O 00:28:57.057 May have multiple subsystem ports: No 00:28:57.057 May have multiple controllers: No 00:28:57.057 Associated with SR-IOV VF: No 00:28:57.057 Max Data Transfer Size: Unlimited 00:28:57.057 Max Number of Namespaces: 0 00:28:57.057 Max Number of I/O Queues: 1024 00:28:57.057 NVMe Specification Version (VS): 1.3 00:28:57.057 NVMe Specification Version (Identify): 1.3 00:28:57.057 Maximum Queue Entries: 1024 
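The configfs sequence traced above (nvmf/common.sh@686–705, followed by `nvme discover`) can be condensed into a sketch. Caveat: the trace only shows the values being echoed, not the target attribute files, so the file names below (`attr_serial`, `attr_allow_any_host`, `namespaces/1/device_path`, `namespaces/1/enable`, `addr_*`) are inferred from the kernel's standard nvmet configfs layout; the function name and the `DRYRUN` switch are illustrative, not SPDK helpers.

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the kernel target setup traced in the log.
# DRYRUN=1 prints each step instead of touching /sys/kernel/config, so the
# sequence can be inspected without root or the nvmet module loaded.
set -euo pipefail

configure_kernel_target_sketch() {
    local nqn=$1 dev=$2 ip=$3 svcid=$4
    local ss=/sys/kernel/config/nvmet/subsystems/$nqn
    local port=/sys/kernel/config/nvmet/ports/1
    step() { if [[ ${DRYRUN:-0} == 1 ]]; then echo "$*"; else eval "$*"; fi; }
    step "mkdir $ss"                           # create the subsystem
    step "mkdir $ss/namespaces/1"              # one namespace under it
    step "mkdir $port"                         # one fabrics port
    step "echo SPDK-$nqn > $ss/attr_serial"    # serial later seen in Identify
    step "echo 1 > $ss/attr_allow_any_host"    # skip the host allow-list
    step "echo $dev > $ss/namespaces/1/device_path"
    step "echo 1 > $ss/namespaces/1/enable"    # activate the namespace
    step "echo $ip > $port/addr_traddr"        # listen address (10.0.0.1 here)
    step "echo tcp > $port/addr_trtype"
    step "echo $svcid > $port/addr_trsvcid"    # 4420 in the log
    step "echo ipv4 > $port/addr_adrfam"
    step "ln -s $ss $port/subsystems/"         # expose the subsystem on the port
}

# Values taken from the trace above:
DRYRUN=1 configure_kernel_target_sketch \
    nqn.2016-06.io.spdk:testnqn /dev/nvme0n1 10.0.0.1 4420
```

After the `ln -s`, the log's `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns two records: the well-known discovery subsystem and `nqn.2016-06.io.spdk:testnqn`, matching the Discovery Log output above.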
00:28:57.057 Contiguous Queues Required: No 00:28:57.057 Arbitration Mechanisms Supported 00:28:57.057 Weighted Round Robin: Not Supported 00:28:57.057 Vendor Specific: Not Supported 00:28:57.057 Reset Timeout: 7500 ms 00:28:57.057 Doorbell Stride: 4 bytes 00:28:57.057 NVM Subsystem Reset: Not Supported 00:28:57.057 Command Sets Supported 00:28:57.057 NVM Command Set: Supported 00:28:57.057 Boot Partition: Not Supported 00:28:57.057 Memory Page Size Minimum: 4096 bytes 00:28:57.057 Memory Page Size Maximum: 4096 bytes 00:28:57.057 Persistent Memory Region: Not Supported 00:28:57.057 Optional Asynchronous Events Supported 00:28:57.057 Namespace Attribute Notices: Not Supported 00:28:57.057 Firmware Activation Notices: Not Supported 00:28:57.057 ANA Change Notices: Not Supported 00:28:57.057 PLE Aggregate Log Change Notices: Not Supported 00:28:57.057 LBA Status Info Alert Notices: Not Supported 00:28:57.057 EGE Aggregate Log Change Notices: Not Supported 00:28:57.057 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.057 Zone Descriptor Change Notices: Not Supported 00:28:57.057 Discovery Log Change Notices: Supported 00:28:57.057 Controller Attributes 00:28:57.057 128-bit Host Identifier: Not Supported 00:28:57.057 Non-Operational Permissive Mode: Not Supported 00:28:57.057 NVM Sets: Not Supported 00:28:57.057 Read Recovery Levels: Not Supported 00:28:57.057 Endurance Groups: Not Supported 00:28:57.057 Predictable Latency Mode: Not Supported 00:28:57.057 Traffic Based Keep ALive: Not Supported 00:28:57.057 Namespace Granularity: Not Supported 00:28:57.057 SQ Associations: Not Supported 00:28:57.057 UUID List: Not Supported 00:28:57.057 Multi-Domain Subsystem: Not Supported 00:28:57.057 Fixed Capacity Management: Not Supported 00:28:57.057 Variable Capacity Management: Not Supported 00:28:57.057 Delete Endurance Group: Not Supported 00:28:57.057 Delete NVM Set: Not Supported 00:28:57.057 Extended LBA Formats Supported: Not Supported 00:28:57.057 Flexible 
Data Placement Supported: Not Supported 00:28:57.057 00:28:57.057 Controller Memory Buffer Support 00:28:57.057 ================================ 00:28:57.057 Supported: No 00:28:57.057 00:28:57.057 Persistent Memory Region Support 00:28:57.057 ================================ 00:28:57.057 Supported: No 00:28:57.057 00:28:57.057 Admin Command Set Attributes 00:28:57.057 ============================ 00:28:57.057 Security Send/Receive: Not Supported 00:28:57.057 Format NVM: Not Supported 00:28:57.057 Firmware Activate/Download: Not Supported 00:28:57.057 Namespace Management: Not Supported 00:28:57.057 Device Self-Test: Not Supported 00:28:57.057 Directives: Not Supported 00:28:57.057 NVMe-MI: Not Supported 00:28:57.057 Virtualization Management: Not Supported 00:28:57.057 Doorbell Buffer Config: Not Supported 00:28:57.057 Get LBA Status Capability: Not Supported 00:28:57.057 Command & Feature Lockdown Capability: Not Supported 00:28:57.057 Abort Command Limit: 1 00:28:57.057 Async Event Request Limit: 1 00:28:57.057 Number of Firmware Slots: N/A 00:28:57.057 Firmware Slot 1 Read-Only: N/A 00:28:57.057 Firmware Activation Without Reset: N/A 00:28:57.057 Multiple Update Detection Support: N/A 00:28:57.058 Firmware Update Granularity: No Information Provided 00:28:57.058 Per-Namespace SMART Log: No 00:28:57.058 Asymmetric Namespace Access Log Page: Not Supported 00:28:57.058 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:57.058 Command Effects Log Page: Not Supported 00:28:57.058 Get Log Page Extended Data: Supported 00:28:57.058 Telemetry Log Pages: Not Supported 00:28:57.058 Persistent Event Log Pages: Not Supported 00:28:57.058 Supported Log Pages Log Page: May Support 00:28:57.058 Commands Supported & Effects Log Page: Not Supported 00:28:57.058 Feature Identifiers & Effects Log Page:May Support 00:28:57.058 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.058 Data Area 4 for Telemetry Log: Not Supported 00:28:57.058 Error Log Page Entries 
Supported: 1 00:28:57.058 Keep Alive: Not Supported 00:28:57.058 00:28:57.058 NVM Command Set Attributes 00:28:57.058 ========================== 00:28:57.058 Submission Queue Entry Size 00:28:57.058 Max: 1 00:28:57.058 Min: 1 00:28:57.058 Completion Queue Entry Size 00:28:57.058 Max: 1 00:28:57.058 Min: 1 00:28:57.058 Number of Namespaces: 0 00:28:57.058 Compare Command: Not Supported 00:28:57.058 Write Uncorrectable Command: Not Supported 00:28:57.058 Dataset Management Command: Not Supported 00:28:57.058 Write Zeroes Command: Not Supported 00:28:57.058 Set Features Save Field: Not Supported 00:28:57.058 Reservations: Not Supported 00:28:57.058 Timestamp: Not Supported 00:28:57.058 Copy: Not Supported 00:28:57.058 Volatile Write Cache: Not Present 00:28:57.058 Atomic Write Unit (Normal): 1 00:28:57.058 Atomic Write Unit (PFail): 1 00:28:57.058 Atomic Compare & Write Unit: 1 00:28:57.058 Fused Compare & Write: Not Supported 00:28:57.058 Scatter-Gather List 00:28:57.058 SGL Command Set: Supported 00:28:57.058 SGL Keyed: Not Supported 00:28:57.058 SGL Bit Bucket Descriptor: Not Supported 00:28:57.058 SGL Metadata Pointer: Not Supported 00:28:57.058 Oversized SGL: Not Supported 00:28:57.058 SGL Metadata Address: Not Supported 00:28:57.058 SGL Offset: Supported 00:28:57.058 Transport SGL Data Block: Not Supported 00:28:57.058 Replay Protected Memory Block: Not Supported 00:28:57.058 00:28:57.058 Firmware Slot Information 00:28:57.058 ========================= 00:28:57.058 Active slot: 0 00:28:57.058 00:28:57.058 00:28:57.058 Error Log 00:28:57.058 ========= 00:28:57.058 00:28:57.058 Active Namespaces 00:28:57.058 ================= 00:28:57.058 Discovery Log Page 00:28:57.058 ================== 00:28:57.058 Generation Counter: 2 00:28:57.058 Number of Records: 2 00:28:57.058 Record Format: 0 00:28:57.058 00:28:57.058 Discovery Log Entry 0 00:28:57.058 ---------------------- 00:28:57.058 Transport Type: 3 (TCP) 00:28:57.058 Address Family: 1 (IPv4) 00:28:57.058 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:28:57.058 Entry Flags: 00:28:57.058 Duplicate Returned Information: 0 00:28:57.058 Explicit Persistent Connection Support for Discovery: 0 00:28:57.058 Transport Requirements: 00:28:57.058 Secure Channel: Not Specified 00:28:57.058 Port ID: 1 (0x0001) 00:28:57.058 Controller ID: 65535 (0xffff) 00:28:57.058 Admin Max SQ Size: 32 00:28:57.058 Transport Service Identifier: 4420 00:28:57.058 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:57.058 Transport Address: 10.0.0.1 00:28:57.058 Discovery Log Entry 1 00:28:57.058 ---------------------- 00:28:57.058 Transport Type: 3 (TCP) 00:28:57.058 Address Family: 1 (IPv4) 00:28:57.058 Subsystem Type: 2 (NVM Subsystem) 00:28:57.058 Entry Flags: 00:28:57.058 Duplicate Returned Information: 0 00:28:57.058 Explicit Persistent Connection Support for Discovery: 0 00:28:57.058 Transport Requirements: 00:28:57.058 Secure Channel: Not Specified 00:28:57.058 Port ID: 1 (0x0001) 00:28:57.058 Controller ID: 65535 (0xffff) 00:28:57.058 Admin Max SQ Size: 32 00:28:57.058 Transport Service Identifier: 4420 00:28:57.058 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:57.058 Transport Address: 10.0.0.1 00:28:57.058 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:57.058 get_feature(0x01) failed 00:28:57.058 get_feature(0x02) failed 00:28:57.058 get_feature(0x04) failed 00:28:57.058 ===================================================== 00:28:57.058 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:57.058 ===================================================== 00:28:57.058 Controller Capabilities/Features 00:28:57.058 ================================ 00:28:57.058 Vendor ID: 0000 00:28:57.058 Subsystem Vendor ID: 
0000 00:28:57.058 Serial Number: 1c8532af7b0b1ca40916 00:28:57.058 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:57.058 Firmware Version: 6.8.9-20 00:28:57.058 Recommended Arb Burst: 6 00:28:57.058 IEEE OUI Identifier: 00 00 00 00:28:57.058 Multi-path I/O 00:28:57.058 May have multiple subsystem ports: Yes 00:28:57.058 May have multiple controllers: Yes 00:28:57.058 Associated with SR-IOV VF: No 00:28:57.058 Max Data Transfer Size: Unlimited 00:28:57.058 Max Number of Namespaces: 1024 00:28:57.058 Max Number of I/O Queues: 128 00:28:57.058 NVMe Specification Version (VS): 1.3 00:28:57.058 NVMe Specification Version (Identify): 1.3 00:28:57.058 Maximum Queue Entries: 1024 00:28:57.058 Contiguous Queues Required: No 00:28:57.058 Arbitration Mechanisms Supported 00:28:57.058 Weighted Round Robin: Not Supported 00:28:57.058 Vendor Specific: Not Supported 00:28:57.058 Reset Timeout: 7500 ms 00:28:57.058 Doorbell Stride: 4 bytes 00:28:57.058 NVM Subsystem Reset: Not Supported 00:28:57.058 Command Sets Supported 00:28:57.058 NVM Command Set: Supported 00:28:57.058 Boot Partition: Not Supported 00:28:57.058 Memory Page Size Minimum: 4096 bytes 00:28:57.058 Memory Page Size Maximum: 4096 bytes 00:28:57.058 Persistent Memory Region: Not Supported 00:28:57.058 Optional Asynchronous Events Supported 00:28:57.058 Namespace Attribute Notices: Supported 00:28:57.058 Firmware Activation Notices: Not Supported 00:28:57.058 ANA Change Notices: Supported 00:28:57.058 PLE Aggregate Log Change Notices: Not Supported 00:28:57.058 LBA Status Info Alert Notices: Not Supported 00:28:57.058 EGE Aggregate Log Change Notices: Not Supported 00:28:57.058 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.058 Zone Descriptor Change Notices: Not Supported 00:28:57.058 Discovery Log Change Notices: Not Supported 00:28:57.058 Controller Attributes 00:28:57.058 128-bit Host Identifier: Supported 00:28:57.058 Non-Operational Permissive Mode: Not Supported 00:28:57.058 NVM Sets: Not 
Supported 00:28:57.058 Read Recovery Levels: Not Supported 00:28:57.058 Endurance Groups: Not Supported 00:28:57.058 Predictable Latency Mode: Not Supported 00:28:57.059 Traffic Based Keep ALive: Supported 00:28:57.059 Namespace Granularity: Not Supported 00:28:57.059 SQ Associations: Not Supported 00:28:57.059 UUID List: Not Supported 00:28:57.059 Multi-Domain Subsystem: Not Supported 00:28:57.059 Fixed Capacity Management: Not Supported 00:28:57.059 Variable Capacity Management: Not Supported 00:28:57.059 Delete Endurance Group: Not Supported 00:28:57.059 Delete NVM Set: Not Supported 00:28:57.059 Extended LBA Formats Supported: Not Supported 00:28:57.059 Flexible Data Placement Supported: Not Supported 00:28:57.059 00:28:57.059 Controller Memory Buffer Support 00:28:57.059 ================================ 00:28:57.059 Supported: No 00:28:57.059 00:28:57.059 Persistent Memory Region Support 00:28:57.059 ================================ 00:28:57.059 Supported: No 00:28:57.059 00:28:57.059 Admin Command Set Attributes 00:28:57.059 ============================ 00:28:57.059 Security Send/Receive: Not Supported 00:28:57.059 Format NVM: Not Supported 00:28:57.059 Firmware Activate/Download: Not Supported 00:28:57.059 Namespace Management: Not Supported 00:28:57.059 Device Self-Test: Not Supported 00:28:57.059 Directives: Not Supported 00:28:57.059 NVMe-MI: Not Supported 00:28:57.059 Virtualization Management: Not Supported 00:28:57.059 Doorbell Buffer Config: Not Supported 00:28:57.059 Get LBA Status Capability: Not Supported 00:28:57.059 Command & Feature Lockdown Capability: Not Supported 00:28:57.059 Abort Command Limit: 4 00:28:57.059 Async Event Request Limit: 4 00:28:57.059 Number of Firmware Slots: N/A 00:28:57.059 Firmware Slot 1 Read-Only: N/A 00:28:57.059 Firmware Activation Without Reset: N/A 00:28:57.059 Multiple Update Detection Support: N/A 00:28:57.059 Firmware Update Granularity: No Information Provided 00:28:57.059 Per-Namespace SMART Log: Yes 
00:28:57.059 Asymmetric Namespace Access Log Page: Supported 00:28:57.059 ANA Transition Time : 10 sec 00:28:57.059 00:28:57.059 Asymmetric Namespace Access Capabilities 00:28:57.059 ANA Optimized State : Supported 00:28:57.059 ANA Non-Optimized State : Supported 00:28:57.059 ANA Inaccessible State : Supported 00:28:57.059 ANA Persistent Loss State : Supported 00:28:57.059 ANA Change State : Supported 00:28:57.059 ANAGRPID is not changed : No 00:28:57.059 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:57.059 00:28:57.059 ANA Group Identifier Maximum : 128 00:28:57.059 Number of ANA Group Identifiers : 128 00:28:57.059 Max Number of Allowed Namespaces : 1024 00:28:57.059 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:57.059 Command Effects Log Page: Supported 00:28:57.059 Get Log Page Extended Data: Supported 00:28:57.059 Telemetry Log Pages: Not Supported 00:28:57.059 Persistent Event Log Pages: Not Supported 00:28:57.059 Supported Log Pages Log Page: May Support 00:28:57.059 Commands Supported & Effects Log Page: Not Supported 00:28:57.059 Feature Identifiers & Effects Log Page:May Support 00:28:57.059 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.059 Data Area 4 for Telemetry Log: Not Supported 00:28:57.059 Error Log Page Entries Supported: 128 00:28:57.059 Keep Alive: Supported 00:28:57.059 Keep Alive Granularity: 1000 ms 00:28:57.059 00:28:57.059 NVM Command Set Attributes 00:28:57.059 ========================== 00:28:57.059 Submission Queue Entry Size 00:28:57.059 Max: 64 00:28:57.059 Min: 64 00:28:57.059 Completion Queue Entry Size 00:28:57.059 Max: 16 00:28:57.059 Min: 16 00:28:57.059 Number of Namespaces: 1024 00:28:57.059 Compare Command: Not Supported 00:28:57.059 Write Uncorrectable Command: Not Supported 00:28:57.059 Dataset Management Command: Supported 00:28:57.059 Write Zeroes Command: Supported 00:28:57.059 Set Features Save Field: Not Supported 00:28:57.059 Reservations: Not Supported 00:28:57.059 Timestamp: Not Supported 
00:28:57.059 Copy: Not Supported 00:28:57.059 Volatile Write Cache: Present 00:28:57.059 Atomic Write Unit (Normal): 1 00:28:57.059 Atomic Write Unit (PFail): 1 00:28:57.059 Atomic Compare & Write Unit: 1 00:28:57.059 Fused Compare & Write: Not Supported 00:28:57.059 Scatter-Gather List 00:28:57.059 SGL Command Set: Supported 00:28:57.059 SGL Keyed: Not Supported 00:28:57.059 SGL Bit Bucket Descriptor: Not Supported 00:28:57.059 SGL Metadata Pointer: Not Supported 00:28:57.059 Oversized SGL: Not Supported 00:28:57.059 SGL Metadata Address: Not Supported 00:28:57.059 SGL Offset: Supported 00:28:57.059 Transport SGL Data Block: Not Supported 00:28:57.059 Replay Protected Memory Block: Not Supported 00:28:57.059 00:28:57.059 Firmware Slot Information 00:28:57.059 ========================= 00:28:57.059 Active slot: 0 00:28:57.059 00:28:57.059 Asymmetric Namespace Access 00:28:57.059 =========================== 00:28:57.059 Change Count : 0 00:28:57.059 Number of ANA Group Descriptors : 1 00:28:57.059 ANA Group Descriptor : 0 00:28:57.059 ANA Group ID : 1 00:28:57.059 Number of NSID Values : 1 00:28:57.059 Change Count : 0 00:28:57.059 ANA State : 1 00:28:57.059 Namespace Identifier : 1 00:28:57.059 00:28:57.059 Commands Supported and Effects 00:28:57.059 ============================== 00:28:57.059 Admin Commands 00:28:57.059 -------------- 00:28:57.059 Get Log Page (02h): Supported 00:28:57.059 Identify (06h): Supported 00:28:57.059 Abort (08h): Supported 00:28:57.059 Set Features (09h): Supported 00:28:57.059 Get Features (0Ah): Supported 00:28:57.059 Asynchronous Event Request (0Ch): Supported 00:28:57.059 Keep Alive (18h): Supported 00:28:57.059 I/O Commands 00:28:57.059 ------------ 00:28:57.059 Flush (00h): Supported 00:28:57.059 Write (01h): Supported LBA-Change 00:28:57.059 Read (02h): Supported 00:28:57.059 Write Zeroes (08h): Supported LBA-Change 00:28:57.059 Dataset Management (09h): Supported 00:28:57.059 00:28:57.059 Error Log 00:28:57.059 ========= 
00:28:57.059 Entry: 0 00:28:57.059 Error Count: 0x3 00:28:57.059 Submission Queue Id: 0x0 00:28:57.059 Command Id: 0x5 00:28:57.059 Phase Bit: 0 00:28:57.059 Status Code: 0x2 00:28:57.059 Status Code Type: 0x0 00:28:57.059 Do Not Retry: 1 00:28:57.059 Error Location: 0x28 00:28:57.059 LBA: 0x0 00:28:57.059 Namespace: 0x0 00:28:57.060 Vendor Log Page: 0x0 00:28:57.060 ----------- 00:28:57.060 Entry: 1 00:28:57.060 Error Count: 0x2 00:28:57.060 Submission Queue Id: 0x0 00:28:57.060 Command Id: 0x5 00:28:57.060 Phase Bit: 0 00:28:57.060 Status Code: 0x2 00:28:57.060 Status Code Type: 0x0 00:28:57.060 Do Not Retry: 1 00:28:57.060 Error Location: 0x28 00:28:57.060 LBA: 0x0 00:28:57.060 Namespace: 0x0 00:28:57.060 Vendor Log Page: 0x0 00:28:57.060 ----------- 00:28:57.060 Entry: 2 00:28:57.060 Error Count: 0x1 00:28:57.060 Submission Queue Id: 0x0 00:28:57.060 Command Id: 0x4 00:28:57.060 Phase Bit: 0 00:28:57.060 Status Code: 0x2 00:28:57.060 Status Code Type: 0x0 00:28:57.060 Do Not Retry: 1 00:28:57.060 Error Location: 0x28 00:28:57.060 LBA: 0x0 00:28:57.060 Namespace: 0x0 00:28:57.060 Vendor Log Page: 0x0 00:28:57.060 00:28:57.060 Number of Queues 00:28:57.060 ================ 00:28:57.060 Number of I/O Submission Queues: 128 00:28:57.060 Number of I/O Completion Queues: 128 00:28:57.060 00:28:57.060 ZNS Specific Controller Data 00:28:57.060 ============================ 00:28:57.060 Zone Append Size Limit: 0 00:28:57.060 00:28:57.060 00:28:57.060 Active Namespaces 00:28:57.060 ================= 00:28:57.060 get_feature(0x05) failed 00:28:57.060 Namespace ID:1 00:28:57.060 Command Set Identifier: NVM (00h) 00:28:57.060 Deallocate: Supported 00:28:57.060 Deallocated/Unwritten Error: Not Supported 00:28:57.060 Deallocated Read Value: Unknown 00:28:57.060 Deallocate in Write Zeroes: Not Supported 00:28:57.060 Deallocated Guard Field: 0xFFFF 00:28:57.060 Flush: Supported 00:28:57.060 Reservation: Not Supported 00:28:57.060 Namespace Sharing Capabilities: Multiple 
Controllers 00:28:57.060 Size (in LBAs): 3750748848 (1788GiB) 00:28:57.060 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:57.060 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:57.060 UUID: 820ffdc0-82d9-4086-9357-94b5c41e7abd 00:28:57.060 Thin Provisioning: Not Supported 00:28:57.060 Per-NS Atomic Units: Yes 00:28:57.060 Atomic Write Unit (Normal): 8 00:28:57.060 Atomic Write Unit (PFail): 8 00:28:57.060 Preferred Write Granularity: 8 00:28:57.060 Atomic Compare & Write Unit: 8 00:28:57.060 Atomic Boundary Size (Normal): 0 00:28:57.060 Atomic Boundary Size (PFail): 0 00:28:57.060 Atomic Boundary Offset: 0 00:28:57.060 NGUID/EUI64 Never Reused: No 00:28:57.060 ANA group ID: 1 00:28:57.060 Namespace Write Protected: No 00:28:57.060 Number of LBA Formats: 1 00:28:57.060 Current LBA Format: LBA Format #00 00:28:57.060 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:57.060 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.060 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.060 rmmod nvme_tcp 00:28:57.322 rmmod nvme_fabrics 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:57.322 21:21:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.322 21:21:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:59.234 21:22:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:03.447 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:29:03.447 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:03.447 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:04.020 00:29:04.020 real 0m21.163s 00:29:04.020 user 0m5.899s 00:29:04.020 sys 0m12.385s 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.020 ************************************ 00:29:04.020 END TEST nvmf_identify_kernel_target 00:29:04.020 ************************************ 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.020 ************************************ 00:29:04.020 START TEST nvmf_auth_host 00:29:04.020 ************************************ 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:04.020 * Looking for test storage... 
00:29:04.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.020 --rc genhtml_branch_coverage=1 00:29:04.020 --rc genhtml_function_coverage=1 00:29:04.020 --rc genhtml_legend=1 00:29:04.020 --rc geninfo_all_blocks=1 00:29:04.020 --rc geninfo_unexecuted_blocks=1 00:29:04.020 00:29:04.020 ' 00:29:04.020 21:22:05 
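The `cmp_versions`/`lt` trace above splits two dotted versions on `.-:` and compares them field by field (here deciding that lcov 1.15 is older than 2 before enabling branch-coverage flags). A condensed, hedged re-sketch of that logic:

```shell
# Sketch of the version comparison traced above: split on .-: and compare
# numerically field by field; missing fields count as 0.
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v x y max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < max; v++ )); do
        x=${a[v]:-0}; y=${b[v]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```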
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.020 --rc genhtml_branch_coverage=1 00:29:04.020 --rc genhtml_function_coverage=1 00:29:04.020 --rc genhtml_legend=1 00:29:04.020 --rc geninfo_all_blocks=1 00:29:04.020 --rc geninfo_unexecuted_blocks=1 00:29:04.020 00:29:04.020 ' 00:29:04.020 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.020 --rc genhtml_branch_coverage=1 00:29:04.020 --rc genhtml_function_coverage=1 00:29:04.020 --rc genhtml_legend=1 00:29:04.020 --rc geninfo_all_blocks=1 00:29:04.020 --rc geninfo_unexecuted_blocks=1 00:29:04.020 00:29:04.020 ' 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:04.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.021 --rc genhtml_branch_coverage=1 00:29:04.021 --rc genhtml_function_coverage=1 00:29:04.021 --rc genhtml_legend=1 00:29:04.021 --rc geninfo_all_blocks=1 00:29:04.021 --rc geninfo_unexecuted_blocks=1 00:29:04.021 00:29:04.021 ' 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
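In the common.sh setup below, `nvme gen-hostnqn` produces the host NQN. A rough stand-in that assumes the standard `nqn.2014-08.org.nvmexpress:uuid:<uuid>` shape and uses the kernel's uuid source instead of nvme-cli:

```shell
# Stand-in for `nvme gen-hostnqn` as used in nvmf/common.sh@17:
# same NQN shape, random uuid from the kernel (or uuidgen as fallback).
uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$uuid"
echo "$NVME_HOSTNQN"
```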
00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.021 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.281 21:22:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:04.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.281 21:22:05 
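The `[: : integer expression expected` error earlier in the trace (from `'[' '' -eq 1 ']'` at common.sh line 33) is POSIX `test` rejecting an empty string where an integer is required. Defaulting the expansion sidesteps it; `flag` is an illustrative name, not the variable common.sh actually tests:

```shell
# POSIX test needs a real integer; an empty string is a runtime error:
#   [ '' -eq 1 ]   ->   [: : integer expression expected
# A defaulted expansion avoids the error entirely:
flag=""
if [ "${flag:-0}" -eq 1 ]; then state=enabled; else state=disabled; fi
echo "$state"
```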
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.281 21:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.410 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:12.411 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:12.411 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
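The `Found net devices under <pci>` lines further down come from globbing each PCI device's `net/` subdirectory in sysfs. A self-contained sketch of that loop, run against a throwaway fake sysfs tree so it behaves the same on any machine:

```shell
# Sketch of the pci -> net-device mapping loop (nvmf/common.sh@410-428),
# exercised on a fake sysfs tree instead of the real /sys/bus/pci/devices.
root=$(mktemp -d)
mkdir -p "$root/0000:31:00.0/net/cvl_0_0"

found=""
for pci in "$root"/*; do
    pci_net_devs=("$pci"/net/*)
    [[ -e ${pci_net_devs[0]} ]] || continue     # no net function here
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip paths, keep ifnames
    found="Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    echo "$found"
done

rm -rf "$root"
```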
00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:12.411 Found net devices under 0000:31:00.0: cvl_0_0 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:12.411 Found net devices under 0000:31:00.1: cvl_0_1 00:29:12.411 21:22:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.411 21:22:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:29:12.411 00:29:12.411 --- 10.0.0.2 ping statistics --- 00:29:12.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.411 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
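The ping statistics in this trace are only checked for success, but the average RTT can be pulled out of the `rtt min/avg/max/mdev` line mechanically. A small sketch; the field positions assume the standard iputils output shown above:

```shell
# Extract the avg RTT from an iputils summary line (copied from the trace).
stats='rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms'
avg=$(awk -F'[/ ]' '{print $8}' <<< "$stats")   # 8th field when split on / and space
echo "avg rtt: $avg ms"
```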
00:29:12.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:29:12.411 00:29:12.411 --- 10.0.0.1 ping statistics --- 00:29:12.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.411 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:12.411 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2264859 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2264859 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2264859 ']' 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.671 21:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.666 21:22:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6d5d17c448f841f76d22f950edb79d8 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yh7 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6d5d17c448f841f76d22f950edb79d8 0 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6d5d17c448f841f76d22f950edb79d8 0 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6d5d17c448f841f76d22f950edb79d8 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yh7 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yh7 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yh7 
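`gen_dhchap_key null 32` above reads 16 random bytes as 32 hex characters and pipes them through an inline python snippet to produce the key file. A hedged reconstruction, assuming the NVMe DH-HMAC-CHAP secret representation (base64 of the ASCII secret followed by its little-endian CRC32, with `00` meaning no hash); the exact body of common.sh's hidden `python -` step is not visible in the trace:

```shell
# Sketch of format_dhchap_key/format_key for a null-digest 32-hex-char key.
# Layout assumption: DHHC-1:<digest>:<base64(secret || crc32_le(secret))>:
key=$(xxd -p -c0 -l 16 /dev/urandom 2>/dev/null ||
      od -An -tx1 -N16 /dev/urandom | tr -d ' \n')
formatted=$(python3 - "$key" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                    # hex string kept as ASCII
crc = struct.pack("<I", zlib.crc32(secret))      # little-endian CRC32
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY
)
echo "$formatted"
```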
00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3786f5ba9a47a3c9590b2b99631f34b655cbd9d2182d9aa641d2119fa662da95 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lNN 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3786f5ba9a47a3c9590b2b99631f34b655cbd9d2182d9aa641d2119fa662da95 3 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3786f5ba9a47a3c9590b2b99631f34b655cbd9d2182d9aa641d2119fa662da95 3 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3786f5ba9a47a3c9590b2b99631f34b655cbd9d2182d9aa641d2119fa662da95 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lNN 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lNN 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lNN 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:13.666 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2deac17cbbecc6591429966ea425cb261f23feef1b7695d 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YAJ 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2deac17cbbecc6591429966ea425cb261f23feef1b7695d 0 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2deac17cbbecc6591429966ea425cb261f23feef1b7695d 0 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2deac17cbbecc6591429966ea425cb261f23feef1b7695d 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YAJ 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YAJ 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YAJ 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4844a7485e8f31825f9deeb7067ea96ce2ee3cd7b419318 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fhh 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4844a7485e8f31825f9deeb7067ea96ce2ee3cd7b419318 2 00:29:13.667 21:22:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4844a7485e8f31825f9deeb7067ea96ce2ee3cd7b419318 2 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4844a7485e8f31825f9deeb7067ea96ce2ee3cd7b419318 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:13.667 21:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fhh 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fhh 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fhh 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb2df953e92bc0e3ec91c8956765867e 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
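Each gen_dhchap_key call above draws len/2 random bytes with `xxd -p -c0 -l <len/2> /dev/urandom`, then format_key (the `python -` heredoc in nvmf/common.sh) wraps the hex string as `DHHC-1:<digest index>:<base64 payload>:`. A minimal Python sketch of that wrapping; the payload layout (ASCII key followed by its CRC-32 in little-endian order) follows the DH-HMAC-CHAP secret representation and is an assumption here, since the log does not show the heredoc body:

```python
import base64
import os
import zlib

# digest-name -> index mapping, as declared in the digests associative array above
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_dhchap_key(length: int) -> str:
    """Mirror gen_dhchap_key: a hex string of length/2 random bytes (like xxd -p)."""
    return os.urandom(length // 2).hex()

def format_dhchap_key(key: str, digest: str) -> str:
    """Wrap an ASCII hex key as DHHC-1:<dd>:<base64(key || CRC-32 LE)>: (assumed trailer)."""
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")  # 4-byte CRC-32 trailer (assumption)
    payload = base64.b64encode(raw + crc).decode()
    return "DHHC-1:{:02x}:{}:".format(DIGESTS[digest], payload)

# Example with a freshly generated sha256 key (value differs per run):
secret = format_dhchap_key(gen_dhchap_key(32), "sha256")
```

Base64-decoding the `DHHC-1:00:ZTJk...` value registered later in this log yields the same 48-character ASCII key printed by xxd here, plus four trailing checksum bytes, which is the structure the sketch reproduces.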
00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wwy 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb2df953e92bc0e3ec91c8956765867e 1 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eb2df953e92bc0e3ec91c8956765867e 1 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb2df953e92bc0e3ec91c8956765867e 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wwy 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wwy 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.wwy 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:13.667 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83ec25ff196406386d57fed261f91ad7 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Vah 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83ec25ff196406386d57fed261f91ad7 1 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83ec25ff196406386d57fed261f91ad7 1 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83ec25ff196406386d57fed261f91ad7 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Vah 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Vah 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Vah 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.927 21:22:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f3b2fef97034d5bcefb6e9287801c96a316ec05336265c7e 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.quf 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f3b2fef97034d5bcefb6e9287801c96a316ec05336265c7e 2 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f3b2fef97034d5bcefb6e9287801c96a316ec05336265c7e 2 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f3b2fef97034d5bcefb6e9287801c96a316ec05336265c7e 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.quf 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.quf 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.quf 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=487b9bae8b59fd332e17dda898ec8368 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4SO 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 487b9bae8b59fd332e17dda898ec8368 0 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 487b9bae8b59fd332e17dda898ec8368 0 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=487b9bae8b59fd332e17dda898ec8368 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:13.927 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4SO 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4SO 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.4SO 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3052c7ed628a2ca34ec710b7d72bea767b793988d72b6ff555151eb6756ac1ff 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7qu 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3052c7ed628a2ca34ec710b7d72bea767b793988d72b6ff555151eb6756ac1ff 3 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3052c7ed628a2ca34ec710b7d72bea767b793988d72b6ff555151eb6756ac1ff 3 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3052c7ed628a2ca34ec710b7d72bea767b793988d72b6ff555151eb6756ac1ff 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:13.928 21:22:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7qu 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7qu 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7qu 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2264859 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2264859 ']' 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
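The registration loop that follows only checks each generated secret with `[[ -n … ]]` before handing it to `rpc_cmd keyring_file_add_key`; when debugging a failed auth run it can help to split a DHHC-1 secret back into its parts. A hedged inverse of the formatting used above (field layout taken from the values visible in this log; the meaning of the 4-byte trailer as a little-endian CRC-32 is an assumption):

```python
import base64
import zlib

def parse_dhchap_key(secret: str):
    """Split DHHC-1:<dd>:<base64>: into (digest index, key string, crc_ok)."""
    prefix, digest, payload, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    raw = base64.b64decode(payload)
    key, trailer = raw[:-4], raw[-4:]
    # Trailer check: CRC-32 of the ASCII key, little-endian (assumption)
    crc_ok = trailer == zlib.crc32(key).to_bytes(4, "little")
    return int(digest, 16), key.decode(), crc_ok
```

Running this over the `keys[1]` secret visible later in the log (`DHHC-1:00:ZTJk…==:`) recovers the exact 48-character hex key that xxd printed for it above.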
00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.928 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yh7 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lNN ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lNN 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YAJ 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
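Further down, configure_kernel_target builds the kernel-side NVMe target through configfs: mkdir the subsystem, its namespace, and a port; write the backing block device and the TCP listen address attributes; then symlink the subsystem under the port. A dry-run sketch of that sequence as an ordered operation list; the attribute file names (`attr_model`, `device_path`, `addr_traddr`, …) are the standard kernel nvmet configfs names and are assumptions here, since the log truncates the redirection targets of the `echo` commands:

```python
def nvmet_target_ops(subsys: str, nvme_dev: str, traddr: str,
                     trsvcid: str = "4420") -> list:
    """Ordered (action, path, value) steps mirroring configure_kernel_target (dry run)."""
    base = "/sys/kernel/config/nvmet"
    sub = f"{base}/subsystems/{subsys}"
    ns = f"{sub}/namespaces/1"
    port = f"{base}/ports/1"
    return [
        ("mkdir", sub, None),
        ("mkdir", ns, None),
        ("mkdir", port, None),
        ("write", f"{sub}/attr_model", f"SPDK-{subsys}"),   # model string (assumed target file)
        ("write", f"{sub}/attr_allow_any_host", "1"),       # assumed target file
        ("write", f"{ns}/device_path", nvme_dev),           # backing device, e.g. /dev/nvme0n1
        ("write", f"{ns}/enable", "1"),
        ("write", f"{port}/addr_traddr", traddr),
        ("write", f"{port}/addr_trtype", "tcp"),
        ("write", f"{port}/addr_trsvcid", trsvcid),
        ("write", f"{port}/addr_adrfam", "ipv4"),
        ("symlink", sub, f"{port}/subsystems/{subsys}"),
    ]
```

Applying the list for real would need root and the nvmet module loaded (`modprobe nvmet`, as the log shows); keeping it as data makes the intended order testable without either.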
00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fhh ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fhh 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.wwy 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Vah ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vah 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.quf 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4SO ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4SO 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7qu 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.187 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.446 21:22:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.446 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:14.447 21:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:18.649 Waiting for block devices as requested 00:29:18.649 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:18.649 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:18.909 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:18.909 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:19.170 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:19.170 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:19.170 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:19.170 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:19.430 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:19.430 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:20.374 No valid GPT data, bailing 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:29:20.374 00:29:20.374 Discovery Log Number of Records 2, Generation counter 2 00:29:20.374 =====Discovery Log Entry 0====== 00:29:20.374 trtype: tcp 00:29:20.374 adrfam: ipv4 00:29:20.374 subtype: current discovery subsystem 00:29:20.374 treq: not specified, sq flow control disable supported 00:29:20.374 portid: 1 00:29:20.374 trsvcid: 4420 00:29:20.374 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:20.374 traddr: 10.0.0.1 00:29:20.374 eflags: none 00:29:20.374 sectype: none 00:29:20.374 =====Discovery Log Entry 1====== 00:29:20.374 trtype: tcp 00:29:20.374 adrfam: ipv4 00:29:20.374 subtype: nvme subsystem 00:29:20.374 treq: not specified, sq flow control disable supported 00:29:20.374 portid: 1 00:29:20.374 trsvcid: 4420 00:29:20.374 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:20.374 traddr: 10.0.0.1 00:29:20.374 eflags: none 00:29:20.374 sectype: none 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:20.374 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:20.375 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:20.637 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:20.637 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.637 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:20.637 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:20.637 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.638 nvme0n1 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.638 21:22:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.638 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.900 nvme0n1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.901 21:22:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.901 
21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.901 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.164 nvme0n1 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.164 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:29:21.425 nvme0n1 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.425 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.686 nvme0n1 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:21.686 21:22:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.686 21:22:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.686 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.947 nvme0n1 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.947 
21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:21.947 
21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.947 21:22:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.947 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.948 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:21.948 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.948 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.210 nvme0n1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.210 21:22:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.210 21:22:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.210 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.472 nvme0n1 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.472 21:22:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
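The trace above repeats the same pattern for each key slot: `host/auth.sh@101-103` iterates every DH group against every key index and calls `nvmet_auth_set_key` followed by `connect_authenticate`. A minimal standalone sketch of that loop structure (RPC calls replaced with `echo`; the group and key lists are placeholders, not the test's full set):

```shell
#!/usr/bin/env bash
# Sketch of the nested loop visible in the log (host/auth.sh):
# every dhgroup is paired with every key index. The real test calls
# nvmet_auth_set_key + connect_authenticate per pair; here each
# combination is only recorded and printed so the sketch runs standalone.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # placeholder subset
keys=(key0 key1 key2 key3 key4)            # placeholder key slots

combos=()
for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do           # keyid = array index, as in the log
    combos+=("sha256 ${dhgroup} keyid=${keyid}")
    echo "sha256 ${dhgroup} keyid=${keyid}"
  done
done
```

The `"${!keys[@]}"` expansion yields the indices 0..4, which is why the log walks keyid 0 through 4 before moving to the next DH group.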
00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.472 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.733 nvme0n1 00:29:22.733 21:22:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.733 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.733 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.733 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.733 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.733 21:22:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:22.733 21:22:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.733 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 nvme0n1 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
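The repeated `get_main_ns_ip` sequence in the trace (nvmf/common.sh@769-783) picks the target address by mapping the transport name to the *name* of an environment variable, then expanding that name indirectly. A sketch of that selection, with a placeholder address standing in for the environment the test harness sets up:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip logic seen in the log (nvmf/common.sh):
# the transport indexes an associative array whose values are variable
# NAMES; ${!...} indirection turns the chosen name into the address.
get_main_ns_ip() {
  local transport="$1" ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  [[ -z ${ip_candidates[$transport]} ]] && return 1   # unknown transport
  ip=${!ip_candidates[$transport]}   # indirect expansion: var name -> value
  [[ -z $ip ]] && return 1           # variable unset
  echo "$ip"
}

NVMF_INITIATOR_IP=10.0.0.1   # placeholder, matching the address in the log
get_main_ns_ip tcp           # prints 10.0.0.1
```

This is why the trace shows `ip=NVMF_INITIATOR_IP` (the name) at `@776` and only the resolved `10.0.0.1` at the final `echo` on `@783`.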
00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.994 21:22:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.994 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.255 nvme0n1 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.255 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.516 nvme0n1 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:23.516 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:23.517 
21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.517 21:22:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.777 nvme0n1 00:29:23.777 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.777 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.777 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.777 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.777 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.037 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.038 21:22:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.038 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.298 nvme0n1 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.298 21:22:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:24.298 
21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.298 21:22:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.298 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.557 nvme0n1 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.557 21:22:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.557 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.557 
21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.558 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.558 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.558 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:24.558 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.558 21:22:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.817 nvme0n1 00:29:24.817 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.817 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.817 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.817 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.076 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.077 21:22:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.077 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.646 nvme0n1 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:25.646 21:22:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.646 21:22:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.905 nvme0n1 00:29:25.905 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.905 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.905 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.905 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.905 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.165 21:22:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.165 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.734 nvme0n1 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.734 21:22:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:26.734 21:22:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.734 21:22:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.734 21:22:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.994 nvme0n1 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.994 21:22:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.994 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.254 21:22:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.254 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.514 nvme0n1 00:29:27.514 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.514 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.514 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.514 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.514 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.514 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:27.774 21:22:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.774 21:22:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.343 nvme0n1 00:29:28.343 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.343 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:28.343 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:28.343 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.343 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.343 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:28.603 21:22:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:28.603 21:22:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.603 21:22:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.603 21:22:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.174 nvme0n1 00:29:29.174 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.174 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.174 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:29.174 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.174 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.174 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.434 21:22:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.434 21:22:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.020 nvme0n1 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.020 21:22:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.020 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:30.279 21:22:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.279 21:22:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.279 21:22:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.848 nvme0n1 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.848 21:22:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.848 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.107 21:22:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.107 21:22:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.675 nvme0n1 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.675 21:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.675 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.934 nvme0n1 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:31.934 21:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:31.934 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:31.935 21:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:31.935 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.935 21:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.194 nvme0n1 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.194 21:22:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.194 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.454 nvme0n1 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.454 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.714 nvme0n1 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.714 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.715 21:22:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.715 
21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.715 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.975 nvme0n1 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.975 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.235 nvme0n1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.235 
21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.235 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.496 nvme0n1 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 
00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:33.496 21:22:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.496 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.756 nvme0n1 00:29:33.756 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.756 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:33.756 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:33.756 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.756 21:22:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.756 21:22:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.756 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.016 nvme0n1 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.016 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.277 nvme0n1 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.277 21:22:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.277 21:22:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.277 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.277 21:22:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 nvme0n1 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.538 
21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:34.538 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:34.539 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:34.539 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:34.539 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:34.539 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:34.539 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.539 21:22:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.110 nvme0n1 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.110 21:22:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.110 21:22:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.110 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.371 nvme0n1 00:29:35.371 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.372 21:22:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.372 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 nvme0n1 00:29:35.634 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.634 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.634 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.634 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.634 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 21:22:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 21:22:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:35.634 21:22:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:35.634 
21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.634 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.896 nvme0n1 00:29:35.896 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.896 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:35.896 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:35.896 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.896 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.896 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:36.156 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.157 21:22:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.157 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.726 nvme0n1 
00:29:36.726 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.726 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:36.727 21:22:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.727 
21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.727 21:22:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.007 nvme0n1 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.007 21:22:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.007 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:37.286 21:22:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.286 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 nvme0n1 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:37.567 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:37.568 21:22:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.568 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:37.828 21:22:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:37.828 21:22:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:37.828 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.828 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.088 nvme0n1 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.088 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.348 21:22:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:38.348 21:22:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.608 nvme0n1 00:29:38.608 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.608 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:38.608 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:38.608 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.608 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.608 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:38.868 21:22:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.868 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.438 nvme0n1 00:29:39.438 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:39.438 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:39.438 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:39.438 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.438 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.438 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.699 21:22:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.270 nvme0n1 00:29:40.270 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.270 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:40.270 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:40.271 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:40.271 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.271 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.531 21:22:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.101 nvme0n1 00:29:41.101 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.101 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.101 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.102 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.102 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.102 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:41.363 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.364 21:22:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.934 nvme0n1 00:29:41.934 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.934 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:41.934 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:41.934 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.934 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.934 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.195 21:22:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:42.766 nvme0n1 00:29:42.766 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.766 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:42.766 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:42.766 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.766 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.766 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:43.026 21:22:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.026 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.027 nvme0n1 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.027 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.287 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.288 nvme0n1 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.288 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.549 nvme0n1 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP
00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.549 21:22:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:43.811 nvme0n1
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=:
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=:
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.811 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.073 nvme0n1
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp:
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=:
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp:
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=:
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.073 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.333 nvme0n1
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:44.333 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:44.334 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:44.334 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:44.334 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:44.334 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.334 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.594 nvme0n1
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb:
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW:
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb:
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW:
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.594 21:22:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.856 nvme0n1
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==:
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5:
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==:
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5:
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:44.856 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.117 nvme0n1
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:45.117 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=:
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=:
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.118 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.379 nvme0n1
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp:
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=:
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp:
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=:
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.379 21:22:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.639 nvme0n1
00:29:45.639 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.639 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:45.639 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:45.639 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.639 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.900 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.161 nvme0n1 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.161 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.162 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.422 nvme0n1 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.422 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==: 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.423 21:22:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.423 21:22:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.684 nvme0n1 00:29:46.684 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.684 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.684 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.684 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.684 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.947 21:22:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.947 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.209 nvme0n1 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.210 
21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp: 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.210 21:22:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.210 21:22:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.784 nvme0n1 00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:47.784 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]]
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.785 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.359 nvme0n1
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb:
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW:
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb:
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW:
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.359 21:22:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.933 nvme0n1
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==:
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5:
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==:
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5:
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.933 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.505 nvme0n1
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=:
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=:
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.505 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.506 21:22:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.767 nvme0n1
00:29:49.767 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.767 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:49.767 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:49.767 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.767 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:49.767 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp:
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=:
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkNWQxN2M0NDhmODQxZjc2ZDIyZjk1MGVkYjc5ZDipkXEp:
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=: ]]
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mzc4NmY1YmE5YTQ3YTNjOTU5MGIyYjk5NjMxZjM0YjY1NWNiZDlkMjE4MmQ5YWE2NDFkMjExOWZhNjYyZGE5NT6Zxww=:
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:50.028 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.029 21:22:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.603 nvme0n1
00:29:50.603 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.603 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:50.603 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.603 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:50.603 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.603 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==:
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==:
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.865 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.437 nvme0n1
00:29:51.437 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.437 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:51.437 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:51.437 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.437 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.437 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb:
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW:
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb:
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW:
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.700 21:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:52.274 nvme0n1
00:29:52.274 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:52.274 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:52.274 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:52.274 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:52.274 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:52.274 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==:
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5:
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjNiMmZlZjk3MDM0ZDViY2VmYjZlOTI4NzgwMWM5NmEzMTZlYzA1MzM2MjY1YzdlK/rhyg==:
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5: ]]
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDg3YjliYWU4YjU5ZmQzMzJlMTdkZGE4OThlYzgzNjjUpgv5:
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:52.536 21:22:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.536 21:22:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.108 nvme0n1 00:29:53.108 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.108 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.108 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.108 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.108 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.108 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA1MmM3ZWQ2MjhhMmNhMzRlYzcxMGI3ZDcyYmVhNzY3Yjc5Mzk4OGQ3MmI2ZmY1NTUxNTFlYjY3NTZhYzFmZpZzSuU=: 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.369 
21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.369 21:22:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.941 nvme0n1 00:29:53.941 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.941 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.941 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.941 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.941 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:53.941 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.202 request: 00:29:54.202 { 00:29:54.202 "name": "nvme0", 00:29:54.202 "trtype": "tcp", 00:29:54.202 "traddr": "10.0.0.1", 00:29:54.202 "adrfam": "ipv4", 00:29:54.202 "trsvcid": "4420", 00:29:54.202 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:54.202 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:54.202 "prchk_reftag": false, 00:29:54.202 "prchk_guard": false, 00:29:54.202 "hdgst": false, 00:29:54.202 "ddgst": false, 00:29:54.202 "allow_unrecognized_csi": false, 00:29:54.202 "method": "bdev_nvme_attach_controller", 00:29:54.202 "req_id": 1 00:29:54.202 } 00:29:54.202 Got JSON-RPC error 
response 00:29:54.202 response: 00:29:54.202 { 00:29:54.202 "code": -5, 00:29:54.202 "message": "Input/output error" 00:29:54.202 } 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:54.202 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.203 request: 
00:29:54.203 { 00:29:54.203 "name": "nvme0", 00:29:54.203 "trtype": "tcp", 00:29:54.203 "traddr": "10.0.0.1", 00:29:54.203 "adrfam": "ipv4", 00:29:54.203 "trsvcid": "4420", 00:29:54.203 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:54.203 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:54.203 "prchk_reftag": false, 00:29:54.203 "prchk_guard": false, 00:29:54.203 "hdgst": false, 00:29:54.203 "ddgst": false, 00:29:54.203 "dhchap_key": "key2", 00:29:54.203 "allow_unrecognized_csi": false, 00:29:54.203 "method": "bdev_nvme_attach_controller", 00:29:54.203 "req_id": 1 00:29:54.203 } 00:29:54.203 Got JSON-RPC error response 00:29:54.203 response: 00:29:54.203 { 00:29:54.203 "code": -5, 00:29:54.203 "message": "Input/output error" 00:29:54.203 } 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.203 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:54.462 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.463 21:22:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.463 request: 00:29:54.463 { 00:29:54.463 "name": "nvme0", 00:29:54.463 "trtype": "tcp", 00:29:54.463 "traddr": "10.0.0.1", 00:29:54.463 "adrfam": "ipv4", 00:29:54.463 "trsvcid": "4420", 00:29:54.463 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:54.463 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:54.463 "prchk_reftag": false, 00:29:54.463 "prchk_guard": false, 00:29:54.463 "hdgst": false, 00:29:54.463 "ddgst": false, 00:29:54.463 "dhchap_key": "key1", 00:29:54.463 "dhchap_ctrlr_key": "ckey2", 00:29:54.463 "allow_unrecognized_csi": false, 00:29:54.463 "method": "bdev_nvme_attach_controller", 00:29:54.463 "req_id": 1 00:29:54.463 } 00:29:54.463 Got JSON-RPC error response 00:29:54.463 response: 00:29:54.463 { 00:29:54.463 "code": -5, 00:29:54.463 "message": "Input/output error" 00:29:54.463 } 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.463 nvme0n1 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:54.463 21:22:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.463 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.722 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.722 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.722 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:54.722 
21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.722 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.723 21:22:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 request: 00:29:54.723 { 00:29:54.723 "name": "nvme0", 00:29:54.723 "dhchap_key": "key1", 00:29:54.723 "dhchap_ctrlr_key": "ckey2", 00:29:54.723 "method": "bdev_nvme_set_keys", 00:29:54.723 "req_id": 1 00:29:54.723 } 00:29:54.723 Got JSON-RPC error response 00:29:54.723 response: 
00:29:54.723 { 00:29:54.723 "code": -13, 00:29:54.723 "message": "Permission denied" 00:29:54.723 } 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:54.723 21:22:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:56.102 21:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.102 21:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:56.102 21:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.102 21:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.102 21:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.102 21:22:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:56.102 21:22:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTJkZWFjMTdjYmJlY2M2NTkxNDI5OTY2ZWE0MjVjYjI2MWYyM2ZlZWYxYjc2OTVkHmxLEQ==: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQ4NDRhNzQ4NWU4ZjMxODI1ZjlkZWViNzA2N2VhOTZjZTJlZTNjZDdiNDE5MzE4PHlxtw==: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.039 nvme0n1 00:29:57.039 21:22:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWIyZGY5NTNlOTJiYzBlM2VjOTFjODk1Njc2NTg2N2U0bonb: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: ]] 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODNlYzI1ZmYxOTY0MDYzODZkNTdmZWQyNjFmOTFhZDfJ92FW: 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:57.039 21:22:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:57.039 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.040 request: 00:29:57.040 { 00:29:57.040 "name": "nvme0", 00:29:57.040 "dhchap_key": "key2", 00:29:57.040 "dhchap_ctrlr_key": "ckey1", 00:29:57.040 "method": "bdev_nvme_set_keys", 00:29:57.040 "req_id": 1 00:29:57.040 } 00:29:57.040 Got JSON-RPC error response 00:29:57.040 response: 00:29:57.040 { 00:29:57.040 "code": -13, 00:29:57.040 "message": "Permission denied" 00:29:57.040 } 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:57.040 21:22:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.040 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.299 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:57.299 21:22:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.237 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.237 rmmod nvme_tcp 
00:29:58.238 rmmod nvme_fabrics 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2264859 ']' 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2264859 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2264859 ']' 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2264859 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264859 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264859' 00:29:58.238 killing process with pid 2264859 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2264859 00:29:58.238 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2264859 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.497 21:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:01.036 21:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:01.036 21:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:04.387 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:04.387 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:04.387 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:04.387 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:04.387 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:04.648 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:05.220 21:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yh7 /tmp/spdk.key-null.YAJ /tmp/spdk.key-sha256.wwy /tmp/spdk.key-sha384.quf 
/tmp/spdk.key-sha512.7qu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:05.220 21:23:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:09.430 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:09.430 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:09.430 00:30:09.430 real 1m5.196s 00:30:09.430 user 0m57.959s 00:30:09.431 sys 0m17.299s 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.431 ************************************ 00:30:09.431 END TEST nvmf_auth_host 00:30:09.431 ************************************ 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.431 ************************************ 00:30:09.431 START TEST nvmf_digest 00:30:09.431 ************************************ 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:09.431 * Looking for test storage... 00:30:09.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.431 21:23:10 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.431 --rc genhtml_branch_coverage=1 00:30:09.431 --rc genhtml_function_coverage=1 00:30:09.431 --rc genhtml_legend=1 00:30:09.431 --rc geninfo_all_blocks=1 00:30:09.431 --rc geninfo_unexecuted_blocks=1 00:30:09.431 00:30:09.431 ' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.431 --rc genhtml_branch_coverage=1 00:30:09.431 --rc genhtml_function_coverage=1 00:30:09.431 --rc genhtml_legend=1 00:30:09.431 --rc geninfo_all_blocks=1 00:30:09.431 --rc geninfo_unexecuted_blocks=1 00:30:09.431 00:30:09.431 ' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.431 --rc genhtml_branch_coverage=1 00:30:09.431 --rc genhtml_function_coverage=1 00:30:09.431 --rc genhtml_legend=1 00:30:09.431 --rc geninfo_all_blocks=1 00:30:09.431 --rc geninfo_unexecuted_blocks=1 00:30:09.431 00:30:09.431 ' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.431 --rc genhtml_branch_coverage=1 00:30:09.431 --rc genhtml_function_coverage=1 00:30:09.431 --rc genhtml_legend=1 00:30:09.431 --rc geninfo_all_blocks=1 00:30:09.431 --rc geninfo_unexecuted_blocks=1 00:30:09.431 00:30:09.431 ' 00:30:09.431 21:23:10 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.431 
21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.431 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:09.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.432 21:23:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.432 21:23:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.577 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.577 21:23:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:17.578 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:17.578 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:17.578 Found net devices under 0000:31:00.0: cvl_0_0 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:17.578 Found net devices under 0000:31:00.1: cvl_0_1 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.578 21:23:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:30:17.839 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.839 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:30:17.840 00:30:17.840 --- 10.0.0.2 ping statistics --- 00:30:17.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.840 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:30:17.840 00:30:17.840 --- 10.0.0.1 ping statistics --- 00:30:17.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.840 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.840 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:18.101 ************************************ 00:30:18.101 START TEST nvmf_digest_clean 00:30:18.101 ************************************ 00:30:18.101 
21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2283561 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2283561 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2283561 ']' 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.101 21:23:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.101 21:23:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:18.101 [2024-12-05 21:23:19.366545] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:18.101 [2024-12-05 21:23:19.366607] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.101 [2024-12-05 21:23:19.456583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.101 [2024-12-05 21:23:19.496827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.101 [2024-12-05 21:23:19.496868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.101 [2024-12-05 21:23:19.496876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.101 [2024-12-05 21:23:19.496883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.101 [2024-12-05 21:23:19.496889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:18.101 [2024-12-05 21:23:19.497490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.044 null0 00:30:19.044 [2024-12-05 21:23:20.291342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.044 [2024-12-05 21:23:20.315549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2283696 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2283696 /var/tmp/bperf.sock 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2283696 ']' 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.044 21:23:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:19.044 [2024-12-05 21:23:20.373709] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:19.044 [2024-12-05 21:23:20.373758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283696 ] 00:30:19.044 [2024-12-05 21:23:20.469610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.306 [2024-12-05 21:23:20.505779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.877 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.877 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:19.877 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:19.877 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:19.877 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:20.140 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.140 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.400 nvme0n1 00:30:20.400 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:20.400 21:23:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:20.400 Running I/O for 2 seconds... 00:30:22.726 18813.00 IOPS, 73.49 MiB/s [2024-12-05T20:23:24.163Z] 18971.50 IOPS, 74.11 MiB/s 00:30:22.726 Latency(us) 00:30:22.726 [2024-12-05T20:23:24.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.726 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:22.726 nvme0n1 : 2.01 18979.12 74.14 0.00 0.00 6736.73 3017.39 15291.73 00:30:22.726 [2024-12-05T20:23:24.163Z] =================================================================================================================== 00:30:22.726 [2024-12-05T20:23:24.163Z] Total : 18979.12 74.14 0.00 0.00 6736.73 3017.39 15291.73 00:30:22.726 { 00:30:22.726 "results": [ 00:30:22.726 { 00:30:22.726 "job": "nvme0n1", 00:30:22.726 "core_mask": "0x2", 00:30:22.726 "workload": "randread", 00:30:22.726 "status": "finished", 00:30:22.726 "queue_depth": 128, 00:30:22.726 "io_size": 4096, 00:30:22.726 "runtime": 2.005941, 00:30:22.726 "iops": 18979.122516564545, 00:30:22.726 "mibps": 74.13719733033025, 00:30:22.726 "io_failed": 0, 00:30:22.726 "io_timeout": 0, 00:30:22.726 "avg_latency_us": 6736.727381996796, 00:30:22.726 "min_latency_us": 3017.3866666666668, 00:30:22.726 "max_latency_us": 15291.733333333334 00:30:22.726 } 00:30:22.726 ], 00:30:22.726 "core_count": 1 00:30:22.726 } 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:22.726 | select(.opcode=="crc32c") 00:30:22.726 | "\(.module_name) \(.executed)"' 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2283696 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2283696 ']' 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2283696 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283696 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283696' 00:30:22.726 killing process with pid 2283696 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2283696 00:30:22.726 Received shutdown signal, test time was about 2.000000 seconds 00:30:22.726 00:30:22.726 Latency(us) 00:30:22.726 [2024-12-05T20:23:24.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.726 [2024-12-05T20:23:24.163Z] =================================================================================================================== 00:30:22.726 [2024-12-05T20:23:24.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.726 21:23:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2283696 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2284482 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2284482 /var/tmp/bperf.sock 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2284482 ']' 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:22.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.726 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:22.726 [2024-12-05 21:23:24.150067] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:22.726 [2024-12-05 21:23:24.150126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284482 ] 00:30:22.726 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:22.726 Zero copy mechanism will not be used. 
00:30:22.987 [2024-12-05 21:23:24.238653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.987 [2024-12-05 21:23:24.268108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.558 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.558 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:23.558 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:23.558 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:23.558 21:23:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:23.825 21:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:23.825 21:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.393 nvme0n1 00:30:24.394 21:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:24.394 21:23:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:24.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:24.394 Zero copy mechanism will not be used. 00:30:24.394 Running I/O for 2 seconds... 
00:30:26.280 3159.00 IOPS, 394.88 MiB/s [2024-12-05T20:23:27.717Z] 3144.50 IOPS, 393.06 MiB/s 00:30:26.280 Latency(us) 00:30:26.280 [2024-12-05T20:23:27.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:26.280 nvme0n1 : 2.01 3144.44 393.06 0.00 0.00 5084.26 1119.57 14199.47 00:30:26.280 [2024-12-05T20:23:27.717Z] =================================================================================================================== 00:30:26.280 [2024-12-05T20:23:27.717Z] Total : 3144.44 393.06 0.00 0.00 5084.26 1119.57 14199.47 00:30:26.280 { 00:30:26.280 "results": [ 00:30:26.280 { 00:30:26.280 "job": "nvme0n1", 00:30:26.280 "core_mask": "0x2", 00:30:26.280 "workload": "randread", 00:30:26.280 "status": "finished", 00:30:26.280 "queue_depth": 16, 00:30:26.280 "io_size": 131072, 00:30:26.280 "runtime": 2.005126, 00:30:26.280 "iops": 3144.440798234126, 00:30:26.280 "mibps": 393.05509977926573, 00:30:26.280 "io_failed": 0, 00:30:26.280 "io_timeout": 0, 00:30:26.280 "avg_latency_us": 5084.256634417129, 00:30:26.280 "min_latency_us": 1119.5733333333333, 00:30:26.280 "max_latency_us": 14199.466666666667 00:30:26.280 } 00:30:26.280 ], 00:30:26.280 "core_count": 1 00:30:26.280 } 00:30:26.280 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:26.630 | select(.opcode=="crc32c") 00:30:26.630 | "\(.module_name) \(.executed)"' 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2284482 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2284482 ']' 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2284482 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2284482 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2284482' 00:30:26.630 killing process with pid 2284482 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2284482 00:30:26.630 Received shutdown signal, test time was about 2.000000 seconds 
00:30:26.630 00:30:26.630 Latency(us) 00:30:26.630 [2024-12-05T20:23:28.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.630 [2024-12-05T20:23:28.067Z] =================================================================================================================== 00:30:26.630 [2024-12-05T20:23:28.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.630 21:23:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2284482 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2285276 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2285276 /var/tmp/bperf.sock 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2285276 ']' 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:26.942 21:23:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:26.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.942 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:26.942 [2024-12-05 21:23:28.107597] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:26.942 [2024-12-05 21:23:28.107651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285276 ] 00:30:26.942 [2024-12-05 21:23:28.197386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.942 [2024-12-05 21:23:28.226390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.514 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.514 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:27.514 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:27.514 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:27.514 21:23:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:27.776 21:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.776 21:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.347 nvme0n1 00:30:28.347 21:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:28.347 21:23:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.347 Running I/O for 2 seconds... 
00:30:30.232 21752.00 IOPS, 84.97 MiB/s [2024-12-05T20:23:31.669Z] 21792.50 IOPS, 85.13 MiB/s 00:30:30.232 Latency(us) 00:30:30.232 [2024-12-05T20:23:31.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.232 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.232 nvme0n1 : 2.00 21803.17 85.17 0.00 0.00 5862.80 2061.65 10212.69 00:30:30.232 [2024-12-05T20:23:31.669Z] =================================================================================================================== 00:30:30.232 [2024-12-05T20:23:31.669Z] Total : 21803.17 85.17 0.00 0.00 5862.80 2061.65 10212.69 00:30:30.232 { 00:30:30.232 "results": [ 00:30:30.232 { 00:30:30.232 "job": "nvme0n1", 00:30:30.232 "core_mask": "0x2", 00:30:30.232 "workload": "randwrite", 00:30:30.232 "status": "finished", 00:30:30.232 "queue_depth": 128, 00:30:30.232 "io_size": 4096, 00:30:30.232 "runtime": 2.004892, 00:30:30.232 "iops": 21803.169447531338, 00:30:30.232 "mibps": 85.16863065441929, 00:30:30.232 "io_failed": 0, 00:30:30.232 "io_timeout": 0, 00:30:30.232 "avg_latency_us": 5862.800459359916, 00:30:30.232 "min_latency_us": 2061.653333333333, 00:30:30.232 "max_latency_us": 10212.693333333333 00:30:30.232 } 00:30:30.232 ], 00:30:30.232 "core_count": 1 00:30:30.232 } 00:30:30.232 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:30.232 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:30.232 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:30.232 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:30.232 | select(.opcode=="crc32c") 00:30:30.232 | "\(.module_name) \(.executed)"' 00:30:30.232 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2285276 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2285276 ']' 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2285276 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285276 00:30:30.492 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:30.493 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:30.493 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285276' 00:30:30.493 killing process with pid 2285276 00:30:30.493 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2285276 00:30:30.493 Received shutdown signal, test time was about 2.000000 seconds 
00:30:30.493 00:30:30.493 Latency(us) 00:30:30.493 [2024-12-05T20:23:31.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.493 [2024-12-05T20:23:31.930Z] =================================================================================================================== 00:30:30.493 [2024-12-05T20:23:31.930Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.493 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2285276 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2285967 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2285967 /var/tmp/bperf.sock 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2285967 ']' 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:30.754 21:23:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:30.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.754 21:23:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:30.754 [2024-12-05 21:23:32.014089] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:30.754 [2024-12-05 21:23:32.014161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285967 ] 00:30:30.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:30.754 Zero copy mechanism will not be used. 
00:30:30.754 [2024-12-05 21:23:32.102423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.754 [2024-12-05 21:23:32.131944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.695 21:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.695 21:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:31.695 21:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:31.695 21:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:31.695 21:23:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:31.695 21:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.695 21:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.988 nvme0n1 00:30:31.988 21:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:31.988 21:23:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:32.249 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:32.249 Zero copy mechanism will not be used. 00:30:32.249 Running I/O for 2 seconds... 
00:30:34.128 4626.00 IOPS, 578.25 MiB/s [2024-12-05T20:23:35.565Z] 4962.50 IOPS, 620.31 MiB/s 00:30:34.128 Latency(us) 00:30:34.128 [2024-12-05T20:23:35.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.129 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:34.129 nvme0n1 : 2.01 4956.56 619.57 0.00 0.00 3221.44 1542.83 6717.44 00:30:34.129 [2024-12-05T20:23:35.566Z] =================================================================================================================== 00:30:34.129 [2024-12-05T20:23:35.566Z] Total : 4956.56 619.57 0.00 0.00 3221.44 1542.83 6717.44 00:30:34.129 { 00:30:34.129 "results": [ 00:30:34.129 { 00:30:34.129 "job": "nvme0n1", 00:30:34.129 "core_mask": "0x2", 00:30:34.129 "workload": "randwrite", 00:30:34.129 "status": "finished", 00:30:34.129 "queue_depth": 16, 00:30:34.129 "io_size": 131072, 00:30:34.129 "runtime": 2.00643, 00:30:34.129 "iops": 4956.5646446673945, 00:30:34.129 "mibps": 619.5705805834243, 00:30:34.129 "io_failed": 0, 00:30:34.129 "io_timeout": 0, 00:30:34.129 "avg_latency_us": 3221.437758337523, 00:30:34.129 "min_latency_us": 1542.8266666666666, 00:30:34.129 "max_latency_us": 6717.44 00:30:34.129 } 00:30:34.129 ], 00:30:34.129 "core_count": 1 00:30:34.129 } 00:30:34.129 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:34.129 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:34.129 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:34.129 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:34.129 | select(.opcode=="crc32c") 00:30:34.129 | "\(.module_name) \(.executed)"' 00:30:34.129 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2285967 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2285967 ']' 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2285967 00:30:34.388 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2285967 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2285967' 00:30:34.389 killing process with pid 2285967 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2285967 00:30:34.389 Received shutdown signal, test time was about 2.000000 seconds 
00:30:34.389 00:30:34.389 Latency(us) 00:30:34.389 [2024-12-05T20:23:35.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.389 [2024-12-05T20:23:35.826Z] =================================================================================================================== 00:30:34.389 [2024-12-05T20:23:35.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.389 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2285967 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2283561 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2283561 ']' 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2283561 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283561 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283561' 00:30:34.649 killing process with pid 2283561 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2283561 00:30:34.649 21:23:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2283561 00:30:34.649 00:30:34.649 
real 0m16.770s 00:30:34.649 user 0m33.196s 00:30:34.649 sys 0m3.561s 00:30:34.649 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.649 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:34.649 ************************************ 00:30:34.649 END TEST nvmf_digest_clean 00:30:34.649 ************************************ 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:34.911 ************************************ 00:30:34.911 START TEST nvmf_digest_error 00:30:34.911 ************************************ 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2286779 00:30:34.911 
21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2286779 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2286779 ']' 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.911 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:34.911 [2024-12-05 21:23:36.193698] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:34.911 [2024-12-05 21:23:36.193745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.911 [2024-12-05 21:23:36.275728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.911 [2024-12-05 21:23:36.310007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.911 [2024-12-05 21:23:36.310034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:34.911 [2024-12-05 21:23:36.310042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.911 [2024-12-05 21:23:36.310049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.911 [2024-12-05 21:23:36.310054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.911 [2024-12-05 21:23:36.310614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.172 [2024-12-05 21:23:36.395092] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.172 21:23:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.172 null0 00:30:35.172 [2024-12-05 21:23:36.478788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.172 [2024-12-05 21:23:36.503000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2286964 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2286964 /var/tmp/bperf.sock 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2286964 ']' 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:35.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.172 21:23:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:35.172 [2024-12-05 21:23:36.559758] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:35.172 [2024-12-05 21:23:36.559806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286964 ] 00:30:35.434 [2024-12-05 21:23:36.649841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.434 [2024-12-05 21:23:36.679957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.005 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.005 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:36.005 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:36.005 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:36.266 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:36.266 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.266 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.266 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.266 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.266 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:36.528 nvme0n1 00:30:36.528 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:36.528 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.528 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:36.528 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.528 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:36.528 21:23:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:36.528 Running I/O for 2 seconds... 00:30:36.528 [2024-12-05 21:23:37.869679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.869713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.869722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.883820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.883840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.883848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.895487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.895506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.895513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.909827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.909845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22424 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.909852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.920935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.920954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.920961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.935466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.935484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.935491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.947513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.947531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.947538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.528 [2024-12-05 21:23:37.957912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.528 [2024-12-05 21:23:37.957929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.528 [2024-12-05 21:23:37.957936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.789 [2024-12-05 21:23:37.972160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.789 [2024-12-05 21:23:37.972179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.789 [2024-12-05 21:23:37.972187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:37.984972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:37.984991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:37.984998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:37.996523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:37.996540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:37.996547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.008417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.008435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.008442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.020388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.020406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.020413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.034011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.034029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.034039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.046195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.046211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.046218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.059333] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.059351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.059358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.071246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.071263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.071270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.084188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.084206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.084213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.094609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.094627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.094633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.108822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.108840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.108847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.122030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.122048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.122055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.133199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.133216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.133222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.145289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.145310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.145317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.158579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.158597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.158604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.171404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.171421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.171428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.184016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.184033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.184040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.197301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.197319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.197325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.208947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.208965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.208971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:36.790 [2024-12-05 21:23:38.222588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:36.790 [2024-12-05 21:23:38.222606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:36.790 [2024-12-05 21:23:38.222612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.233674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.233692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.233699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.246459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.246477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.053 [2024-12-05 21:23:38.246484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.260245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.260264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.260270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.269820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.269837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.269843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.283713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.283730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.283737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.295687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.295705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:8330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.295711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.309029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.309047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.309054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.322442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.322459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.322465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.332987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.333005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.333011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.346068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.346093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.346100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.360375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.360393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.360402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.372251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.372269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.372275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.383413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.383430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.383437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.397935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 
00:30:37.053 [2024-12-05 21:23:38.397952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.397958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.410132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.410149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.410156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.420839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.420856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.420867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.435942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.053 [2024-12-05 21:23:38.435959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.053 [2024-12-05 21:23:38.435966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.053 [2024-12-05 21:23:38.448604] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.054 [2024-12-05 21:23:38.448622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.054 [2024-12-05 21:23:38.448629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.054 [2024-12-05 21:23:38.460075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.054 [2024-12-05 21:23:38.460092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.054 [2024-12-05 21:23:38.460098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.054 [2024-12-05 21:23:38.471832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.054 [2024-12-05 21:23:38.471851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.054 [2024-12-05 21:23:38.471858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.054 [2024-12-05 21:23:38.485458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.054 [2024-12-05 21:23:38.485476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.054 [2024-12-05 21:23:38.485482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.496925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.496942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.496949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.510408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.510426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.510432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.523566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.523583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.523590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.535128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.535146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.535152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.547723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.547740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.547747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.561168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.561185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.561192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.574084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.574101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.574112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.584941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.584958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.584965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.598234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.598251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.598258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.611909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.611926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.611933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.624745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.624763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.624769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.635561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.635578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22659 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.635585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.648014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.648034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.648041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.659675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.316 [2024-12-05 21:23:38.659692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.316 [2024-12-05 21:23:38.659699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.316 [2024-12-05 21:23:38.673937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.673954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.673961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.317 [2024-12-05 21:23:38.685282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.685302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:23739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.685309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.317 [2024-12-05 21:23:38.698916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.698933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.698939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.317 [2024-12-05 21:23:38.711308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.711325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.711331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.317 [2024-12-05 21:23:38.723675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.723693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.723699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.317 [2024-12-05 21:23:38.736426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.736443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.736449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.317 [2024-12-05 21:23:38.748526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.317 [2024-12-05 21:23:38.748543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.317 [2024-12-05 21:23:38.748549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.578 [2024-12-05 21:23:38.761335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.578 [2024-12-05 21:23:38.761353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.578 [2024-12-05 21:23:38.761359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.578 [2024-12-05 21:23:38.773397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.578 [2024-12-05 21:23:38.773413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.578 [2024-12-05 21:23:38.773419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.578 [2024-12-05 21:23:38.786499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 
00:30:37.578 [2024-12-05 21:23:38.786516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.578 [2024-12-05 21:23:38.786523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.578 [2024-12-05 21:23:38.797362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.578 [2024-12-05 21:23:38.797379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.578 [2024-12-05 21:23:38.797386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.578 [2024-12-05 21:23:38.810780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.810797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.810803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.822618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.822634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.822641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.836745] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.836762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.836769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 20145.00 IOPS, 78.69 MiB/s [2024-12-05T20:23:39.016Z] [2024-12-05 21:23:38.850848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.850873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.860839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.860855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.860866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.874981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.874998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.875004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.890853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.890875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.890882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.905045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.905061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.905071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.916152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.916168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.916174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.928413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.928429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.928436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.941510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.941526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.941533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.955186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.955203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.967631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.967647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.967653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.980907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.980924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.579 [2024-12-05 21:23:38.980930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:38.990515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:38.990532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:38.990538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.579 [2024-12-05 21:23:39.005328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.579 [2024-12-05 21:23:39.005346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.579 [2024-12-05 21:23:39.005352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.839 [2024-12-05 21:23:39.019212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.839 [2024-12-05 21:23:39.019233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.839 [2024-12-05 21:23:39.019240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.033696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.033713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:12388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.033719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.045499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.045515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.045522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.055997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.056013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.056020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.068436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.068453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.068460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.081876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.081893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.081901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.095005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.095022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.095028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.108235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.108252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.108258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.120758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.120775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.120781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.132263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 
00:30:37.840 [2024-12-05 21:23:39.132279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.132286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.145375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.145392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.145398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.156321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.156338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.156344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.169629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.169646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.169652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.182340] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.182357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.182363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.195461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.195477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.195484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.208485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.208502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.208508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.221309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.221325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.221332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.232162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.232181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.232188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.245611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.245627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.245633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.258666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.258688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.840 [2024-12-05 21:23:39.270459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:37.840 [2024-12-05 21:23:39.270475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.840 [2024-12-05 21:23:39.270481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.280975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.280998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.294777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.294794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.294800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.309284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.309301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.309307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.321262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.321278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.321284] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.331680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.331697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.331703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.344733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.344750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.344756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.358325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.358342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.358348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.370893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.370910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14039 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.370917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.382961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.382977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.382984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.394849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.394872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.394879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.408123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.408140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.408146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.101 [2024-12-05 21:23:39.421134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.101 [2024-12-05 21:23:39.421151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.101 [2024-12-05 21:23:39.421158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.432664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.432681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.432687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.445416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.445434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.445443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.457528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.457545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.457551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.469796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.469814] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.469820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.482956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.482973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.482979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.496897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.496914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.496920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.506457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.506474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.506481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.520174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.520192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.520198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.102 [2024-12-05 21:23:39.533620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.102 [2024-12-05 21:23:39.533638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.102 [2024-12-05 21:23:39.533644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.363 [2024-12-05 21:23:39.547484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.363 [2024-12-05 21:23:39.547502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.363 [2024-12-05 21:23:39.547508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.363 [2024-12-05 21:23:39.561515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.363 [2024-12-05 21:23:39.561538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.363 [2024-12-05 21:23:39.561545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.363 [2024-12-05 21:23:39.575053] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.363 [2024-12-05 21:23:39.575071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.363 [2024-12-05 21:23:39.575077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.363 [2024-12-05 21:23:39.587206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.363 [2024-12-05 21:23:39.587222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.363 [2024-12-05 21:23:39.587229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.363 [2024-12-05 21:23:39.598748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.363 [2024-12-05 21:23:39.598766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.363 [2024-12-05 21:23:39.598772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.613054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.613071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.613077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.626100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.626117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.626123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.637594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.637611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.637617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.649434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.649451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.649458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.661416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.661433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.661440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.673586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.673603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.673610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.687135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.687153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.687160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.699050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.699067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.699074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.711767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.711784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.711790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.723731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.723750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.723756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.737351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.737368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.737374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.749177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.749194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.749201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.761619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.761636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:38.364 [2024-12-05 21:23:39.761642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.775325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.775341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.364 [2024-12-05 21:23:39.787333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.364 [2024-12-05 21:23:39.787350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.364 [2024-12-05 21:23:39.787356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.625 [2024-12-05 21:23:39.800884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.625 [2024-12-05 21:23:39.800902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.625 [2024-12-05 21:23:39.800908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.625 [2024-12-05 21:23:39.810684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.625 [2024-12-05 21:23:39.810701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:14639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.625 [2024-12-05 21:23:39.810708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.625 [2024-12-05 21:23:39.824125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.625 [2024-12-05 21:23:39.824142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.625 [2024-12-05 21:23:39.824148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.625 [2024-12-05 21:23:39.837037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.625 [2024-12-05 21:23:39.837054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.625 [2024-12-05 21:23:39.837061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.625 [2024-12-05 21:23:39.849284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x662680) 00:30:38.625 [2024-12-05 21:23:39.849301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.625 [2024-12-05 21:23:39.849307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.625 20156.50 IOPS, 78.74 MiB/s 00:30:38.625 Latency(us) 00:30:38.625 [2024-12-05T20:23:40.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.625 Job: nvme0n1 (Core 
Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:38.625 nvme0n1 : 2.00 20190.06 78.87 0.00 0.00 6334.69 2184.53 18131.63 00:30:38.625 [2024-12-05T20:23:40.062Z] =================================================================================================================== 00:30:38.625 [2024-12-05T20:23:40.062Z] Total : 20190.06 78.87 0.00 0.00 6334.69 2184.53 18131.63 00:30:38.625 { 00:30:38.625 "results": [ 00:30:38.625 { 00:30:38.625 "job": "nvme0n1", 00:30:38.625 "core_mask": "0x2", 00:30:38.625 "workload": "randread", 00:30:38.625 "status": "finished", 00:30:38.625 "queue_depth": 128, 00:30:38.625 "io_size": 4096, 00:30:38.625 "runtime": 2.003015, 00:30:38.625 "iops": 20190.063479304947, 00:30:38.625 "mibps": 78.86743546603495, 00:30:38.625 "io_failed": 0, 00:30:38.625 "io_timeout": 0, 00:30:38.625 "avg_latency_us": 6334.6866714472935, 00:30:38.625 "min_latency_us": 2184.5333333333333, 00:30:38.625 "max_latency_us": 18131.626666666667 00:30:38.625 } 00:30:38.625 ], 00:30:38.625 "core_count": 1 00:30:38.625 } 00:30:38.625 21:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:38.625 21:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:38.625 21:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:38.625 | .driver_specific 00:30:38.625 | .nvme_error 00:30:38.625 | .status_code 00:30:38.625 | .command_transient_transport_error' 00:30:38.625 21:23:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2286964 
00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2286964 ']' 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2286964 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286964 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286964' 00:30:38.885 killing process with pid 2286964 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2286964 00:30:38.885 Received shutdown signal, test time was about 2.000000 seconds 00:30:38.885 00:30:38.885 Latency(us) 00:30:38.885 [2024-12-05T20:23:40.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.885 [2024-12-05T20:23:40.322Z] =================================================================================================================== 00:30:38.885 [2024-12-05T20:23:40.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2286964 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:38.885 21:23:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2287701 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2287701 /var/tmp/bperf.sock 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2287701 ']' 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.885 21:23:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:38.885 [2024-12-05 21:23:40.292185] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:30:38.885 [2024-12-05 21:23:40.292241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287701 ] 00:30:38.885 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:38.885 Zero copy mechanism will not be used. 00:30:39.145 [2024-12-05 21:23:40.380444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.145 [2024-12-05 21:23:40.409358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.716 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.716 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:39.716 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:39.716 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:39.976 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:39.976 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.976 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:39.976 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.976 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:39.977 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:40.237 nvme0n1 00:30:40.237 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:40.237 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.237 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:40.237 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.237 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:40.238 21:23:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:40.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:40.238 Zero copy mechanism will not be used. 00:30:40.238 Running I/O for 2 seconds... 
00:30:40.499 [2024-12-05 21:23:41.680121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.680154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.680164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.689608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.689632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.689640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.697640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.697660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.697667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.705953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.705973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.705980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.713954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.713974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.713981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.723155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.723175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.723181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.731556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.731574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.731581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.738086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.738104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.738111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.741096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.741114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.741120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.749379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.749397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.749408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.757548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.757566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.499 [2024-12-05 21:23:41.757573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:40.499 [2024-12-05 21:23:41.768642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:40.499 [2024-12-05 21:23:41.768661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... further near-identical log entries elided: the same repeating triple of "nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0)" followed by a READ command print (nvme_qpair.c: 243) and a "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion print (nvme_qpair.c: 474), all on qid:1 with varying cid and lba values, covering timestamps 2024-12-05 21:23:41.768668 through 21:23:42.463496 ...]
00:30:41.282 [2024-12-05 21:23:42.471918]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.471936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.471942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.481472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.481490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.481496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.490640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.490658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.490664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.500543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.500561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.500567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.511034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.511052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.511058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.520056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.520074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.520081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.530081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.530099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.530106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.538397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.538415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.538421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.549527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.549546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.549552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.559093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.559111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.559117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.568499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.568518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.568524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.578195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.578214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.578220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.588480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.282 [2024-12-05 21:23:42.588498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.282 [2024-12-05 21:23:42.588505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.282 [2024-12-05 21:23:42.599681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.599698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.599705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.610231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.610249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.610255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.621006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.621025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.621034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.631579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.631597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.631604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.640688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.640706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.640712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.650330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.650348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.650354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.659094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.659112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.659119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.669477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.669495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.669501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.283 3393.00 IOPS, 424.12 MiB/s [2024-12-05T20:23:42.720Z] [2024-12-05 21:23:42.681663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.681681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.681688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.691201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.691219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.691225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.704374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.704392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.704398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.283 [2024-12-05 21:23:42.716903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.283 [2024-12-05 21:23:42.716921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.283 [2024-12-05 21:23:42.716927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.727064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.727082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.727088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.736951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.736969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.736978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.746643] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.746661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.746667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.756270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.756288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.756295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.765776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.765795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.765801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.776543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.776562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.776568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.786753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.786772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.786779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.796903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.796921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.796931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.805881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.805899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.805906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.812730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.812748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.812754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.818924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.818943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.818949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.827977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.827994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.828001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.839488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.839506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.839512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.849152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.849170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.849176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.860009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.860027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.860033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.870061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.870080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.870086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.880477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.880501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.880508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.890121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.890140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.890146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.899739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.899758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.899765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.908302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.908321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.908327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.917082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.917100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.917107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.928733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.928753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.928760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.940959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.940978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.940985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.951940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.543 [2024-12-05 21:23:42.951959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.543 [2024-12-05 21:23:42.951965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.543 [2024-12-05 21:23:42.962975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.544 [2024-12-05 21:23:42.962993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.544 [2024-12-05 21:23:42.963000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.544 [2024-12-05 21:23:42.974246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.544 [2024-12-05 21:23:42.974265] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.544 [2024-12-05 21:23:42.974271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:42.983966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:42.983985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:42.983991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:42.995088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:42.995106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:42.995112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.005630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.005648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.005654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.013951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.013969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.013975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.024549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.024567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.024573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.035557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.035576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.035582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.044578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.044597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.044603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.053295] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.053313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.053323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.062541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.062560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.062567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.073018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.073036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.073043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.080980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.080999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.081005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.091035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.091054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.091060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.099362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.099380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.803 [2024-12-05 21:23:43.099387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.803 [2024-12-05 21:23:43.107940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.803 [2024-12-05 21:23:43.107958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.107964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.118187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.118204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.118210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.128171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.128189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.128196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.139086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.139105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.139111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.150553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.150570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.150577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.159643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.159661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 
21:23:43.159667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.169407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.169426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.169432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.178546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.178564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.178570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.187987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.188005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.188012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.197426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.197444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.197450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.208404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.208422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.208429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.220163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.220181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.220191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:41.804 [2024-12-05 21:23:43.233480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:41.804 [2024-12-05 21:23:43.233498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.804 [2024-12-05 21:23:43.233505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.245994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.246012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.246018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.258579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.258598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.258604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.271272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.271291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.271297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.284608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.284625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.284631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.296482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.296501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.296507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.309382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.309401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.309407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.321752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.321769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.321775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.334301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.334323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.334329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.345635] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.345652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.345659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.356037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.356055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.356061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.366927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.366946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.366952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.377767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.377786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.377792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.386701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.386721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.386727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.396497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.396515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.396521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.406333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.406352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.406358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.412529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.412547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.412553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.421890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.421907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.421914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.430562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.430580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.430586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.442061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.442080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.442087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.453417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.453436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 
21:23:43.453442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.465385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.465404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.465410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.476225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.476243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.476249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.064 [2024-12-05 21:23:43.486796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.064 [2024-12-05 21:23:43.486815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.064 [2024-12-05 21:23:43.486821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.324 [2024-12-05 21:23:43.499171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.324 [2024-12-05 21:23:43.499191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.324 [2024-12-05 21:23:43.499197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.324 [2024-12-05 21:23:43.510172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.324 [2024-12-05 21:23:43.510191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.324 [2024-12-05 21:23:43.510201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.324 [2024-12-05 21:23:43.520110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.324 [2024-12-05 21:23:43.520128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.324 [2024-12-05 21:23:43.520135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.530663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.530681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.530687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.540999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.541018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.541024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.551081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.551100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.551106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.561765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.561784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.561791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.572946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.572964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.572971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.584688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 
00:30:42.325 [2024-12-05 21:23:43.584707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.584713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.595692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.595710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.595717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.607339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.607360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.607367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.620249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.620268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.620274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.633017] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.633035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.633042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.645865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.645883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.645890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.656348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.656366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.656372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.665195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.665212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.665218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:30:42.325 [2024-12-05 21:23:43.675652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15011b0) 00:30:42.325 [2024-12-05 21:23:43.675670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:42.325 [2024-12-05 21:23:43.675676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:42.325 3185.00 IOPS, 398.12 MiB/s 00:30:42.325 Latency(us) 00:30:42.325 [2024-12-05T20:23:43.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.325 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:42.325 nvme0n1 : 2.00 3188.10 398.51 0.00 0.00 5015.72 1126.40 13271.04 00:30:42.325 [2024-12-05T20:23:43.762Z] =================================================================================================================== 00:30:42.325 [2024-12-05T20:23:43.762Z] Total : 3188.10 398.51 0.00 0.00 5015.72 1126.40 13271.04 00:30:42.325 { 00:30:42.325 "results": [ 00:30:42.325 { 00:30:42.325 "job": "nvme0n1", 00:30:42.325 "core_mask": "0x2", 00:30:42.325 "workload": "randread", 00:30:42.325 "status": "finished", 00:30:42.325 "queue_depth": 16, 00:30:42.325 "io_size": 131072, 00:30:42.325 "runtime": 2.003076, 00:30:42.325 "iops": 3188.0967072642275, 00:30:42.325 "mibps": 398.51208840802843, 00:30:42.325 "io_failed": 0, 00:30:42.325 "io_timeout": 0, 00:30:42.325 "avg_latency_us": 5015.719085499531, 00:30:42.325 "min_latency_us": 1126.4, 00:30:42.325 "max_latency_us": 13271.04 00:30:42.325 } 00:30:42.325 ], 00:30:42.325 "core_count": 1 00:30:42.325 } 00:30:42.325 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:42.325 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:30:42.325 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:42.325 | .driver_specific 00:30:42.325 | .nvme_error 00:30:42.325 | .status_code 00:30:42.325 | .command_transient_transport_error' 00:30:42.325 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2287701 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2287701 ']' 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2287701 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2287701 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2287701' 00:30:42.584 killing process with pid 2287701 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2287701 00:30:42.584 Received shutdown signal, test time was 
about 2.000000 seconds 00:30:42.584 00:30:42.584 Latency(us) 00:30:42.584 [2024-12-05T20:23:44.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.584 [2024-12-05T20:23:44.021Z] =================================================================================================================== 00:30:42.584 [2024-12-05T20:23:44.021Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.584 21:23:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2287701 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2288390 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2288390 /var/tmp/bperf.sock 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2288390 ']' 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.844 
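The bdevperf results JSON printed above reports both `"iops"` and `"mibps"`; the MiB/s figure is simply IOPS times the 131072-byte IO size, and total IOs completed is IOPS times runtime. A quick consistency check in Python using the exact values from the randread job's results block (this is a sanity sketch against the logged numbers, not part of the test itself):

```python
# Values copied from the "results" JSON for job nvme0n1 above.
iops = 3188.0967072642275    # "iops"
io_size = 131072             # "io_size" in bytes (128 KiB per IO)
runtime = 2.003076           # "runtime" in seconds

# 131072 B = 1/8 MiB, so MiB/s is IOPS / 8.
mibps = iops * io_size / (1024 * 1024)
# Total IOs completed during the run.
total_ios = iops * runtime

print(round(mibps, 2))   # agrees with "mibps": 398.51208840802843
print(round(total_ios))
```

The rounded throughput matches the logged `"mibps"` field, confirming the two fields are consistent for this run.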
21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:42.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.844 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:42.844 [2024-12-05 21:23:44.108273] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:42.844 [2024-12-05 21:23:44.108332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288390 ] 00:30:42.844 [2024-12-05 21:23:44.199078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.844 [2024-12-05 21:23:44.227281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.783 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.783 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:43.783 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:43.783 21:23:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:43.783 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:43.783 21:23:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.783 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:43.783 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.783 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:43.783 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:44.043 nvme0n1 00:30:44.043 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:44.043 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.043 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:44.043 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.043 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:44.043 21:23:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:44.043 Running I/O for 2 seconds... 
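For readability, the setup sequence that `host/digest.sh` drives above can be summarized as the following RPC transcript (commands and arguments are excerpted verbatim from this log; the `accel_error_inject_error` calls go through `rpc_cmd` to the target application rather than the bperf socket, and the meaning of `-i 256` is taken on faith from the log, not documented here):

```shell
# Configure the bperf bdev layer: keep per-status error counters, retry forever.
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Disable crc32c corruption so the controller attaches cleanly...
rpc_cmd accel_error_inject_error -o crc32c -t disable

# ...attach with data digest enabled (--ddgst) over TCP...
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then re-enable crc32c corruption (-i 256 as logged) and run the workload.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
bdevperf.py -s /var/tmp/bperf.sock perform_tests
```

Each injected corruption then surfaces below as a `data_crc32_calc_done: *ERROR*: Data digest error` paired with a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion, which is the counter the test later reads back via `bdev_get_iostat`.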
00:30:44.043 [2024-12-05 21:23:45.472668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef3a28 00:30:44.043 [2024-12-05 21:23:45.474780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.043 [2024-12-05 21:23:45.474808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.483000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef31b8 00:30:44.303 [2024-12-05 21:23:45.484431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.484449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.494919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef31b8 00:30:44.303 [2024-12-05 21:23:45.496333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.496349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.506823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef31b8 00:30:44.303 [2024-12-05 21:23:45.508244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.508260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.518726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef31b8 00:30:44.303 [2024-12-05 21:23:45.520143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.520160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.529821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee23b8 00:30:44.303 [2024-12-05 21:23:45.531222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.531238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.542464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee23b8 00:30:44.303 [2024-12-05 21:23:45.543869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.543885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.554564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee23b8 00:30:44.303 [2024-12-05 21:23:45.555935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.555950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.566404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.567791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.567807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.578291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.579678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.579694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.590143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.591533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.591549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.601989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.603377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.603392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.613846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.615236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.615252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.625720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.627116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.627132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.637558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.638934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.638950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.649409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.650768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:44.303 [2024-12-05 21:23:45.650783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.661265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.662652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.662669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.673111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.674499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.303 [2024-12-05 21:23:45.674515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.303 [2024-12-05 21:23:45.684967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.303 [2024-12-05 21:23:45.686353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.304 [2024-12-05 21:23:45.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.304 [2024-12-05 21:23:45.696969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.304 [2024-12-05 21:23:45.698361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24380 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.304 [2024-12-05 21:23:45.698377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.304 [2024-12-05 21:23:45.708841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.304 [2024-12-05 21:23:45.710232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.304 [2024-12-05 21:23:45.710247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.304 [2024-12-05 21:23:45.720693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.304 [2024-12-05 21:23:45.722067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.304 [2024-12-05 21:23:45.722083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.304 [2024-12-05 21:23:45.732562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.304 [2024-12-05 21:23:45.733930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.304 [2024-12-05 21:23:45.733946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.744410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.564 [2024-12-05 21:23:45.745798] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.745813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.756267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.564 [2024-12-05 21:23:45.757657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.757672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.768120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.564 [2024-12-05 21:23:45.769501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.769517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.779960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.564 [2024-12-05 21:23:45.781352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.781368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.791819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef2948 00:30:44.564 [2024-12-05 21:23:45.793214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.793229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.802887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee1b48 00:30:44.564 [2024-12-05 21:23:45.804254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.804269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.815487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee1b48 00:30:44.564 [2024-12-05 21:23:45.816854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.816872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.827327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee1b48 00:30:44.564 [2024-12-05 21:23:45.828703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.828719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.839174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee1b48 00:30:44.564 
[2024-12-05 21:23:45.840547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.840563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.851015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee1b48 00:30:44.564 [2024-12-05 21:23:45.852350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.852366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.862807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef20d8 00:30:44.564 [2024-12-05 21:23:45.864178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.874686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef20d8 00:30:44.564 [2024-12-05 21:23:45.876066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.876082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.886541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af19d0) with pdu=0x200016ef20d8 00:30:44.564 [2024-12-05 21:23:45.887872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.887888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.897773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee49b0 00:30:44.564 [2024-12-05 21:23:45.899113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.899129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.907699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:44.564 [2024-12-05 21:23:45.908553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.564 [2024-12-05 21:23:45.908569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:44.564 [2024-12-05 21:23:45.920404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:44.564 [2024-12-05 21:23:45.921272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.921288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:44.565 [2024-12-05 21:23:45.932326] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eed4e8 00:30:44.565 [2024-12-05 21:23:45.933174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.933190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:44.565 [2024-12-05 21:23:45.945936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee12d8 00:30:44.565 [2024-12-05 21:23:45.947444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.947459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:44.565 [2024-12-05 21:23:45.955859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef1868 00:30:44.565 [2024-12-05 21:23:45.956874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.956890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:44.565 [2024-12-05 21:23:45.968659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016edf118 00:30:44.565 [2024-12-05 21:23:45.969508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.969524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:30:44.565 [2024-12-05 21:23:45.980857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.565 [2024-12-05 21:23:45.982058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.982074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.565 [2024-12-05 21:23:45.992711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.565 [2024-12-05 21:23:45.993903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.565 [2024-12-05 21:23:45.993922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.004564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.005765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.005781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.016420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.017611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.017627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.028256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.029448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.029464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.040097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.041287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.041302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.053436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.055271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.055286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.063736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.064917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.064934] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.075579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.076763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.076779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.087413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.088597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.088613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.099238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.100418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.100434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.111098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.112274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.112290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.122950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.124126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.124143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.134788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.135970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.135986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.146618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.147791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.147807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.158459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.159636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:44.825 [2024-12-05 21:23:46.159652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.170312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.171497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.171512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.182174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.183354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.183369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.194041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee95a0 00:30:44.825 [2024-12-05 21:23:46.195186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.195201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.205853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.207028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.207044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.217778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.825 [2024-12-05 21:23:46.218942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.825 [2024-12-05 21:23:46.218958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.825 [2024-12-05 21:23:46.229621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.826 [2024-12-05 21:23:46.230791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.826 [2024-12-05 21:23:46.230807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.826 [2024-12-05 21:23:46.241473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.826 [2024-12-05 21:23:46.242647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.826 [2024-12-05 21:23:46.242662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:44.826 [2024-12-05 21:23:46.253324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:44.826 [2024-12-05 21:23:46.254495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.826 [2024-12-05 21:23:46.254511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.265191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 [2024-12-05 21:23:46.266354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.086 [2024-12-05 21:23:46.266370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.277082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 [2024-12-05 21:23:46.278233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.086 [2024-12-05 21:23:46.278249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.288920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 [2024-12-05 21:23:46.290101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.086 [2024-12-05 21:23:46.290116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.300772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 
[2024-12-05 21:23:46.301928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.086 [2024-12-05 21:23:46.301947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.312624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 [2024-12-05 21:23:46.313792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.086 [2024-12-05 21:23:46.313809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.324473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 [2024-12-05 21:23:46.325644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.086 [2024-12-05 21:23:46.325660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.086 [2024-12-05 21:23:46.336312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef9f68 00:30:45.086 [2024-12-05 21:23:46.337455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.337471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.348103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af19d0) with pdu=0x200016ee4de8 00:30:45.087 [2024-12-05 21:23:46.349260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.349276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.359137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efa7d8 00:30:45.087 [2024-12-05 21:23:46.360262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.360278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.371733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efa7d8 00:30:45.087 [2024-12-05 21:23:46.372880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.372896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.383580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efa7d8 00:30:45.087 [2024-12-05 21:23:46.384730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.384746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.397005] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efa7d8 00:30:45.087 [2024-12-05 21:23:46.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.398822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.407337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eea680 00:30:45.087 [2024-12-05 21:23:46.408485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.408501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.419206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eea680 00:30:45.087 [2024-12-05 21:23:46.420348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.420365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.431071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eea680 00:30:45.087 [2024-12-05 21:23:46.432212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.432228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:30:45.087 [2024-12-05 21:23:46.442939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eea680 00:30:45.087 [2024-12-05 21:23:46.444084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.444100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:45.087 21353.00 IOPS, 83.41 MiB/s [2024-12-05T20:23:46.524Z] [2024-12-05 21:23:46.456290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eea680 00:30:45.087 [2024-12-05 21:23:46.458046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.458061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.467009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8a50 00:30:45.087 [2024-12-05 21:23:46.468304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.468320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.479011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef0788 00:30:45.087 [2024-12-05 21:23:46.480298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.480313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.490915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef0788 00:30:45.087 [2024-12-05 21:23:46.492209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.492225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.502784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee23b8 00:30:45.087 [2024-12-05 21:23:46.504097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.504113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.087 [2024-12-05 21:23:46.513918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efef90 00:30:45.087 [2024-12-05 21:23:46.515195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.087 [2024-12-05 21:23:46.515212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.528137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efda78 00:30:45.349 [2024-12-05 21:23:46.530078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 
[2024-12-05 21:23:46.530094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.538453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.539737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.539752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.550312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.551613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.551628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.562372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.563655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.563671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.574250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.575534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15570 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.575550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.586127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.587414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.587430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.597985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.599301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.599317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.609855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.611153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.611172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.621710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.349 [2024-12-05 21:23:46.622994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:12432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.349 [2024-12-05 21:23:46.623011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.349 [2024-12-05 21:23:46.633586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.634907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.634923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.645489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.646775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.646791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.657357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.658647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.658663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.669234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.670520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.670536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.681087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.682371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.682387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.692950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.694235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.694251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.706314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ef8e88 00:30:45.350 [2024-12-05 21:23:46.708247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.708263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.716773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efe2e8 00:30:45.350 
[2024-12-05 21:23:46.718052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.718069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.730149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efe2e8 00:30:45.350 [2024-12-05 21:23:46.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.732047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.740890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eeee38 00:30:45.350 [2024-12-05 21:23:46.742316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.742332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.752921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.350 [2024-12-05 21:23:46.754347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.754363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.764804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.350 [2024-12-05 21:23:46.766236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.766252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.350 [2024-12-05 21:23:46.776685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.350 [2024-12-05 21:23:46.778078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.350 [2024-12-05 21:23:46.778095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.788546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.789975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.789992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.800406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.801837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.801853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.812280] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.813715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.813731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.824162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.825587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.825603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.836034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.837463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.837479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.847908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.849340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.849356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:30:45.611 [2024-12-05 21:23:46.859763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.861193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.861209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.871646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.873047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.873063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.883527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.884929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.884945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.895399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.611 [2024-12-05 21:23:46.896827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.611 [2024-12-05 21:23:46.896843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.611 [2024-12-05 21:23:46.907281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.908713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.908729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.919154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.920583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.920602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.931009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.932439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.932456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.942891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.944322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.944339] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.954755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.956195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.966628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.968060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.968076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.978485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.979911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.979927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:46.990339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:46.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:46.991780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:47.002189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:47.003577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:47.003592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:47.014075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eff3c8 00:30:45.612 [2024-12-05 21:23:47.015465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:47.015481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:47.025880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eef6a8 00:30:45.612 [2024-12-05 21:23:47.027302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.612 [2024-12-05 21:23:47.027318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:45.612 [2024-12-05 21:23:47.037702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee0ea0 00:30:45.612 [2024-12-05 21:23:47.039109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19261 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:45.612 [2024-12-05 21:23:47.039125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.051057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee0ea0 00:30:45.873 [2024-12-05 21:23:47.053113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.053128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.061359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eeff18 00:30:45.873 [2024-12-05 21:23:47.062765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.062781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.073213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eeff18 00:30:45.873 [2024-12-05 21:23:47.074617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.074632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.086561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016eeff18 00:30:45.873 [2024-12-05 21:23:47.088607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.088623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.096154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee3060 00:30:45.873 [2024-12-05 21:23:47.097532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.097547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.108777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee3060 00:30:45.873 [2024-12-05 21:23:47.110171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.110187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.119601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efdeb0 00:30:45.873 [2024-12-05 21:23:47.120529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.120545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.132239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5a90 00:30:45.873 [2024-12-05 21:23:47.133775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.133791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.142556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efac10 00:30:45.873 [2024-12-05 21:23:47.143436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.143452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.154422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efac10 00:30:45.873 [2024-12-05 21:23:47.155305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.155321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.166271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efac10 00:30:45.873 [2024-12-05 21:23:47.167154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.873 [2024-12-05 21:23:47.167170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:45.873 [2024-12-05 21:23:47.178127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016efac10 
00:30:45.874 [2024-12-05 21:23:47.179006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.179021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.189924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:45.874 [2024-12-05 21:23:47.190790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.190805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.201785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:45.874 [2024-12-05 21:23:47.202653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.202668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.213643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:45.874 [2024-12-05 21:23:47.214472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.214488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.225564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:45.874 [2024-12-05 21:23:47.226432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.226451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.237408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:45.874 [2024-12-05 21:23:47.238278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.238294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.250750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee5220 00:30:45.874 [2024-12-05 21:23:47.252274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.252289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.261065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:45.874 [2024-12-05 21:23:47.261924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.261940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.272928] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:45.874 [2024-12-05 21:23:47.273780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.273796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.284798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:45.874 [2024-12-05 21:23:47.285657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.285673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:45.874 [2024-12-05 21:23:47.296669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:45.874 [2024-12-05 21:23:47.297532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.874 [2024-12-05 21:23:47.297547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.308536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.309364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.309381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:30:46.136 [2024-12-05 21:23:47.320382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.321242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.321259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.332229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.333075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.333092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.344098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.344929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.344945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.355947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.356765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.356780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.367803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.368662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.368677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.379635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.380497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.380512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.391488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.392355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.392371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.403342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.404163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.404179] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.415229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.416093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.136 [2024-12-05 21:23:47.416109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.136 [2024-12-05 21:23:47.427082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.136 [2024-12-05 21:23:47.427928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.137 [2024-12-05 21:23:47.427943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.137 [2024-12-05 21:23:47.438945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.137 [2024-12-05 21:23:47.439799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.137 [2024-12-05 21:23:47.439814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.137 [2024-12-05 21:23:47.450791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af19d0) with pdu=0x200016ee9168 00:30:46.137 [2024-12-05 21:23:47.451625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.137 [2024-12-05 21:23:47.451641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:46.137 21436.50 IOPS, 83.74 MiB/s 00:30:46.137 Latency(us) 00:30:46.137 [2024-12-05T20:23:47.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.137 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.137 nvme0n1 : 2.00 21452.51 83.80 0.00 0.00 5958.52 2266.45 15182.51 00:30:46.137 [2024-12-05T20:23:47.574Z] =================================================================================================================== 00:30:46.137 [2024-12-05T20:23:47.574Z] Total : 21452.51 83.80 0.00 0.00 5958.52 2266.45 15182.51 00:30:46.137 { 00:30:46.137 "results": [ 00:30:46.137 { 00:30:46.137 "job": "nvme0n1", 00:30:46.137 "core_mask": "0x2", 00:30:46.137 "workload": "randwrite", 00:30:46.137 "status": "finished", 00:30:46.137 "queue_depth": 128, 00:30:46.137 "io_size": 4096, 00:30:46.137 "runtime": 2.004474, 00:30:46.137 "iops": 21452.510733489184, 00:30:46.137 "mibps": 83.79887005269212, 00:30:46.137 "io_failed": 0, 00:30:46.137 "io_timeout": 0, 00:30:46.137 "avg_latency_us": 5958.515683511237, 00:30:46.137 "min_latency_us": 2266.4533333333334, 00:30:46.137 "max_latency_us": 15182.506666666666 00:30:46.137 } 00:30:46.137 ], 00:30:46.137 "core_count": 1 00:30:46.137 } 00:30:46.137 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:46.137 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:46.137 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:46.137 | .driver_specific 00:30:46.137 | .nvme_error 00:30:46.137 | .status_code 00:30:46.137 | .command_transient_transport_error' 00:30:46.137 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2288390 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2288390 ']' 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2288390 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2288390 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2288390' 00:30:46.398 killing process with pid 2288390 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2288390 00:30:46.398 Received shutdown signal, test time was about 2.000000 seconds 00:30:46.398 00:30:46.398 Latency(us) 00:30:46.398 [2024-12-05T20:23:47.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.398 [2024-12-05T20:23:47.835Z] =================================================================================================================== 00:30:46.398 
[2024-12-05T20:23:47.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2288390 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2289067 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2289067 /var/tmp/bperf.sock 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2289067 ']' 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:46.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.398 21:23:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:46.658 [2024-12-05 21:23:47.866008] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:30:46.658 [2024-12-05 21:23:47.866066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289067 ] 00:30:46.658 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:46.658 Zero copy mechanism will not be used. 00:30:46.658 [2024-12-05 21:23:47.956462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.658 [2024-12-05 21:23:47.985870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.227 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.227 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:47.227 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:47.227 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:47.487 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:47.487 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.487 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:30:47.487 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.487 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:47.487 21:23:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:47.747 nvme0n1 00:30:47.747 21:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:47.747 21:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.747 21:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:48.007 21:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.007 21:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:48.007 21:23:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:48.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:48.007 Zero copy mechanism will not be used. 00:30:48.007 Running I/O for 2 seconds... 
00:30:48.007 [2024-12-05 21:23:49.273761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.007 [2024-12-05 21:23:49.273855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.007 [2024-12-05 21:23:49.273886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.007 [2024-12-05 21:23:49.283295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.007 [2024-12-05 21:23:49.283614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.007 [2024-12-05 21:23:49.283633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.007 [2024-12-05 21:23:49.292813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.007 [2024-12-05 21:23:49.293082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.293099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.304040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.304322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.304339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.311931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.311989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.312005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.317851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.317967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.317989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.323932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.324008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.324024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.331373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.331704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.331721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.338239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.338320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.338335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.344659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.344906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.344921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.352531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.352591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.352606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.362453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.362526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.362541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.369548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.369616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.369631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.376244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.376303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.376318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.383341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.383650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.383666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.393061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.393125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.393141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.399438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.399515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.399530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.405185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.405283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.405299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.410322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.410387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.410401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.416754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.416815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.416831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.422251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.422309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.422325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.431112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.431213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.431229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.436841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.436927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.436942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.008 [2024-12-05 21:23:49.441897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.008 [2024-12-05 21:23:49.441976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.008 [2024-12-05 21:23:49.441992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.449938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.450207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.450222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.457279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.457349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.457365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.462887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.462981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.462996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.467707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.467796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.467812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.473802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.473913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.473928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.483582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.483658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.483674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.489703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.490012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.490028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.495223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.495512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.495532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.502562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.502848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.502868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.509291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.509357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.509372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.514891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.514956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.514972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.521711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.521789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.521805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.529182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.529249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.529264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.536343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.536416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.542875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.542927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.542942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.547857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.547967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.547982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.556402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.556504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.556521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.561666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.561720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.561736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.567124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.270 [2024-12-05 21:23:49.567182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.270 [2024-12-05 21:23:49.567197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.270 [2024-12-05 21:23:49.575750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.575822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.575838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.581894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.582143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.582158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.588173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.588250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.588266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.593139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.593202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.593218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.597293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.597380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.597396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.604573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.604631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.604647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.610384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.610465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.610480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.615457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.615532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.615547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.620737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.620817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.620832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.628669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.628944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.628960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.635455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.635696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.635710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.641219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.641538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.641555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.650451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.650513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.650528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.656505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.656561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.656577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.665497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.665564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.665582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.673253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.673565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.673582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.678845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.678944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.678959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.684655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.684758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.684773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.691174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.691266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.691281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.697333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.697397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.697412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.271 [2024-12-05 21:23:49.702885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.271 [2024-12-05 21:23:49.702952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.271 [2024-12-05 21:23:49.702967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.532 [2024-12-05 21:23:49.709340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.532 [2024-12-05 21:23:49.709431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.532 [2024-12-05 21:23:49.709447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.532 [2024-12-05 21:23:49.717244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.532 [2024-12-05 21:23:49.717315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.532 [2024-12-05 21:23:49.717330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:48.532 [2024-12-05 21:23:49.724089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.532 [2024-12-05 21:23:49.724179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.532 [2024-12-05 21:23:49.724194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:48.532 [2024-12-05 21:23:49.729825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.532 [2024-12-05 21:23:49.729960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.532 [2024-12-05 21:23:49.729975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:48.532 [2024-12-05 21:23:49.737718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.532 [2024-12-05 21:23:49.737972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.532 [2024-12-05 21:23:49.737987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:48.532 [2024-12-05 21:23:49.743605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:48.532 [2024-12-05 21:23:49.743668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:48.532 [2024-12-05 21:23:49.743683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
dnr:0 00:30:48.532 [2024-12-05 21:23:49.748847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.748925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.748940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.756311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.756376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.756391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.763163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.763405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.763420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.770806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.770873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.770888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.779413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.779669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.779684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.789096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.789160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.789177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.799330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.799624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.799640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.810504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.810784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.810800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.822352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.822679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.822694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.834349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.834416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.834432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.846307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.846595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.846612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.857765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.858089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.532 [2024-12-05 21:23:49.858105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.869365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.869691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.869707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.881537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.881868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.881887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.532 [2024-12-05 21:23:49.892096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.532 [2024-12-05 21:23:49.892156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.532 [2024-12-05 21:23:49.892171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.902920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.903191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.903206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.912892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.913177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.913193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.922646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.922704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.922720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.932315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.932394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.932409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.939992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.940051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.940066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.947736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.948018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.948035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.957603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.957674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.957690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.533 [2024-12-05 21:23:49.965558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.533 [2024-12-05 21:23:49.965624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.533 [2024-12-05 21:23:49.965640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:49.973679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 
00:30:48.795 [2024-12-05 21:23:49.973747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:49.973763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:49.978671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:49.978728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:49.978742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:49.984027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:49.984349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:49.984364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:49.989147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:49.989209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:49.989224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:49.997364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:49.997468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:49.997484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.003989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.004077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.004093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.007965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.008064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.008079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.013736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.013797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.013813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.018879] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.018976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.018996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.025725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.025781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.025797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.030627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.030682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.030697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.036550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.036846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.036866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:30:48.795 [2024-12-05 21:23:50.042855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.042925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.042940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.047190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.047277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.047292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.051272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.795 [2024-12-05 21:23:50.051335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.795 [2024-12-05 21:23:50.051351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.795 [2024-12-05 21:23:50.056236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.056323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.056338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.060749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.060807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.060829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.066804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.066882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.066897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.072180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.072259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.072274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.077965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.078026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.078041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.084591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.084664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.084679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.091482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.091560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.091575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.098791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.098857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.098878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.104114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.104254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.796 [2024-12-05 21:23:50.104270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.113795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.113906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.113921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.123950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.124319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.124335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.135456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.135723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.135739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.146715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.146990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.147006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.157232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.157545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.157561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.168444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.168738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.168754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.176093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.176170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.176185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.183437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.183491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.183507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.190099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.190404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.190420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.199828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.199906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.199921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.207029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.207104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.207119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.212936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 
00:30:48.796 [2024-12-05 21:23:50.213037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.213052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:48.796 [2024-12-05 21:23:50.220581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:48.796 [2024-12-05 21:23:50.220655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.796 [2024-12-05 21:23:50.220670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.229707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.229772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.229788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.238646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.238724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.238740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.246752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.247045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.247061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.255819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.255887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.255903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.263845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.263912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.263928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.272307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.272391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.272409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.075 4192.00 IOPS, 524.00 MiB/s 
[2024-12-05T20:23:50.512Z] [2024-12-05 21:23:50.281252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.281314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.281330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.289957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.290017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.290033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.297760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.297836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.297851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.305988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.306052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.306067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.312914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.312979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.312994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.321176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.321239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.321254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.326905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.327155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.327171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.334245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.334319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.334335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.338880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.338959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.338974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.343808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.343887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.343903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.075 [2024-12-05 21:23:50.347952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.075 [2024-12-05 21:23:50.348009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.075 [2024-12-05 21:23:50.348025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.352302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.352606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:49.076 [2024-12-05 21:23:50.352622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.359270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.359351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.359366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.364915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.364977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.364992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.371277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.371593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.371609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.377127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.377255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.377270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.384338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.384649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.384665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.392812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.392893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.392908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.399840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.399921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.399937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.408933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.409255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.409271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.418836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.419118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.419134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.427202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.427271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.427286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.435891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.436155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.436170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.443419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 
00:30:49.076 [2024-12-05 21:23:50.443688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.443704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.451916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.451979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.451994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.459100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.459158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.459176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.464289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.464391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.464406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.469323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.469384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.469399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.475790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.476031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.476047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.480956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.481053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.481069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.486857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.487117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.487133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.493434] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.493500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.493515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.498732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.498788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.498804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.076 [2024-12-05 21:23:50.505434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.076 [2024-12-05 21:23:50.505499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.076 [2024-12-05 21:23:50.505514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.337 [2024-12-05 21:23:50.513528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.337 [2024-12-05 21:23:50.513596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.337 [2024-12-05 21:23:50.513612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:30:49.337 [2024-12-05 21:23:50.518090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.337 [2024-12-05 21:23:50.518171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.518186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.525353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.525461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.525476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.531754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.531852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.531874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.536826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.537134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.537150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.543448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.543510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.543525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.549976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.550314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.550330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.558246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.558336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.558352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.563721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.563784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.563799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.569289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.569574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.569590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.574214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.574282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.574298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.580152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.580210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.580226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.585510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.585774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:49.338 [2024-12-05 21:23:50.585790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.592750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.592827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.592841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.597662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.597731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.597747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.604964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.605025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.605041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.610012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.610081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.610096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.614167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.614263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.614281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.622020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.622086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.622102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.630518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.630583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.630598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.638853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.638930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.638945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.647207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.647259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.647274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.654355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.654443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.662276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.338 [2024-12-05 21:23:50.662340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.338 [2024-12-05 21:23:50.662356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.338 [2024-12-05 21:23:50.671448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 
00:30:49.338 [2024-12-05 21:23:50.671507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.338 [2024-12-05 21:23:50.671523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.338 [2024-12-05 21:23:50.681934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.338 [2024-12-05 21:23:50.682041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.338 [2024-12-05 21:23:50.682058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.338 [2024-12-05 21:23:50.689048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.338 [2024-12-05 21:23:50.689120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.338 [2024-12-05 21:23:50.689136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.338 [2024-12-05 21:23:50.696550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.338 [2024-12-05 21:23:50.696605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.338 [2024-12-05 21:23:50.696620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.338 [2024-12-05 21:23:50.705370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.338 [2024-12-05 21:23:50.705434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.705449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.713769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.713829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.713844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.720733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.720786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.720801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.728036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.728128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.728143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.734958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.735040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.741291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.741390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.741405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.749500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.749569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.749584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.757697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.757762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.757777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.339 [2024-12-05 21:23:50.766440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.339 [2024-12-05 21:23:50.766504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.339 [2024-12-05 21:23:50.766519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.773788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.774139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.774155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.781200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.781396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.781412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.787174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.787361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.787378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.792559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.792752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.792769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.798733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.799028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.799044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.804459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.804607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.804622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.810635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.810948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.810968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.817064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.817336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.817351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.822229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.600 [2024-12-05 21:23:50.822431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.600 [2024-12-05 21:23:50.822447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.600 [2024-12-05 21:23:50.827875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.828173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.828189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.832630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.832852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.832873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.839673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.839939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.839956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.844017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.844193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.844208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.850320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.850496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.850512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.855186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.855362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.855378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.860158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.860374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.860390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.865567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.865742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.865758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.872002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.872286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.872302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.876614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.876879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.876896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.882947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.883124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.883140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.890111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.890286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.890302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.896569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.896888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.896904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.903019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.903199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.903215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.908466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.908644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.908660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.915221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.915398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.915414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.920656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.920966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.920982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.926163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.926349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.926365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.932022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.932198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.932215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.938583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.938909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.938925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.947565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.947831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.947847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.953481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.953661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.953678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.957894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.958061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.958078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.962894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.963067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.963086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.967935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.968076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.968091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.973670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.973851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.973872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.978905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.979086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.601 [2024-12-05 21:23:50.979102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.601 [2024-12-05 21:23:50.983298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.601 [2024-12-05 21:23:50.983477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:50.983493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:50.987293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:50.987471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:50.987487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:50.993771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:50.993956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:50.993972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:50.999194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:50.999409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:50.999425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:51.007447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:51.007680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:51.007696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:51.014505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:51.014746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:51.014762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:51.021499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:51.021812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:51.021828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:51.028361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:51.028440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:51.028456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.602 [2024-12-05 21:23:51.033902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.602 [2024-12-05 21:23:51.034209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.602 [2024-12-05 21:23:51.034225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.863 [2024-12-05 21:23:51.039550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.863 [2024-12-05 21:23:51.039829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.863 [2024-12-05 21:23:51.039846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.863 [2024-12-05 21:23:51.048293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.863 [2024-12-05 21:23:51.048596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.863 [2024-12-05 21:23:51.048613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.863 [2024-12-05 21:23:51.053593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.863 [2024-12-05 21:23:51.053757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.863 [2024-12-05 21:23:51.053773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.863 [2024-12-05 21:23:51.059456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.059774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.059791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.063834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.064108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.064124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.068398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.068738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.068755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.072843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.073028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.073044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.080064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.080243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.080259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.088594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.088867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.088883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.099303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.099504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.099520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.109718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.109958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.109974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.118762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.119064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.119081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.125698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.125880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.125896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.130376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.130554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.130573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.135649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.135828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.135845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.141405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.141581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.141598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.146650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.146832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.146849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.153781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.154015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.154031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.159309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.159488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.159504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.166439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8
00:30:49.864 [2024-12-05 21:23:51.166703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:49.864 [2024-12-05 21:23:51.166719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:49.864 [2024-12-05 21:23:51.172413] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.864 [2024-12-05 21:23:51.172621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.864 [2024-12-05 21:23:51.172638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.864 [2024-12-05 21:23:51.177694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.864 [2024-12-05 21:23:51.177992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.864 [2024-12-05 21:23:51.178008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.184792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.185073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.185090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.191675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.191850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.191872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 
21:23:51.197202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.197376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.197392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.203279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.203512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.203528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.208220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.208450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.208466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.212962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.213137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.213152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.218769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.219057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.225640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.225922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.225938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.233192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.233525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.233541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.237592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.237772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.237789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.243420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.243599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.243615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.247955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.248133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.248149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.255529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.255798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.255815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.260310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.260470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.260487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.265055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.265204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.265220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.269730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.269900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.269917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:49.865 [2024-12-05 21:23:51.274974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af1d10) with pdu=0x200016eff3c8 00:30:49.865 [2024-12-05 21:23:51.275131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:49.865 [2024-12-05 21:23:51.275146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:49.865 4500.00 IOPS, 562.50 MiB/s 00:30:49.865 Latency(us) 00:30:49.865 [2024-12-05T20:23:51.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.865 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:49.865 nvme0n1 : 2.00 4499.09 562.39 0.00 0.00 3551.13 1843.20 
12397.23
00:30:49.865 [2024-12-05T20:23:51.303Z] ===================================================================================================================
00:30:49.866 [2024-12-05T20:23:51.303Z] Total : 4499.09 562.39 0.00 0.00 3551.13 1843.20 12397.23
00:30:49.866 {
00:30:49.866 "results": [
00:30:49.866 {
00:30:49.866 "job": "nvme0n1",
00:30:49.866 "core_mask": "0x2",
00:30:49.866 "workload": "randwrite",
00:30:49.866 "status": "finished",
00:30:49.866 "queue_depth": 16,
00:30:49.866 "io_size": 131072,
00:30:49.866 "runtime": 2.003962,
00:30:49.866 "iops": 4499.087308042767,
00:30:49.866 "mibps": 562.3859135053459,
00:30:49.866 "io_failed": 0,
00:30:49.866 "io_timeout": 0,
00:30:49.866 "avg_latency_us": 3551.127358769595,
00:30:49.866 "min_latency_us": 1843.2,
00:30:49.866 "max_latency_us": 12397.226666666667
00:30:49.866 }
00:30:49.866 ],
00:30:49.866 "core_count": 1
00:30:49.866 }
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:50.126 | .driver_specific
00:30:50.126 | .nvme_error
00:30:50.126 | .status_code
00:30:50.126 | .command_transient_transport_error'
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 291 > 0 ))
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2289067
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2289067 ']'
00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2289067 00:30:50.126 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289067 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289067' 00:30:50.127 killing process with pid 2289067 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2289067 00:30:50.127 Received shutdown signal, test time was about 2.000000 seconds 00:30:50.127 00:30:50.127 Latency(us) 00:30:50.127 [2024-12-05T20:23:51.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.127 [2024-12-05T20:23:51.564Z] =================================================================================================================== 00:30:50.127 [2024-12-05T20:23:51.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:50.127 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2289067 00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2286779 00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2286779 ']' 00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@958 -- # kill -0 2286779
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286779
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286779'
00:30:50.387 killing process with pid 2286779
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2286779
00:30:50.387 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2286779
00:30:50.649
00:30:50.649 real 0m15.691s
00:30:50.649 user 0m31.568s
00:30:50.649 sys 0m3.492s
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:50.649 ************************************
00:30:50.649 END TEST nvmf_digest_error
00:30:50.649 ************************************
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.649 rmmod nvme_tcp 00:30:50.649 rmmod nvme_fabrics 00:30:50.649 rmmod nvme_keyring 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2286779 ']' 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2286779 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2286779 ']' 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2286779 00:30:50.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2286779) - No such process 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2286779 is not found' 00:30:50.649 Process with pid 2286779 is not found 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:50.649 21:23:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:50.649 21:23:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:53.189
00:30:53.189 real 0m43.541s
00:30:53.189 user 1m7.259s
00:30:53.189 sys 0m13.600s
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:53.189 ************************************
00:30:53.189 END TEST nvmf_digest
00:30:53.189 ************************************
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:53.189 21:23:54 nvmf_tcp.nvmf_host --
common/autotest_common.sh@10 -- # set +x 00:30:53.189 ************************************ 00:30:53.189 START TEST nvmf_bdevperf 00:30:53.189 ************************************ 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:53.189 * Looking for test storage... 00:30:53.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:30:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.189 --rc genhtml_branch_coverage=1 00:30:53.189 --rc genhtml_function_coverage=1 00:30:53.189 --rc genhtml_legend=1 00:30:53.189 --rc geninfo_all_blocks=1 00:30:53.189 --rc geninfo_unexecuted_blocks=1 00:30:53.189 00:30:53.189 ' 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.189 --rc genhtml_branch_coverage=1 00:30:53.189 --rc genhtml_function_coverage=1 00:30:53.189 --rc genhtml_legend=1 00:30:53.189 --rc geninfo_all_blocks=1 00:30:53.189 --rc geninfo_unexecuted_blocks=1 00:30:53.189 00:30:53.189 ' 00:30:53.189 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.189 --rc genhtml_branch_coverage=1 00:30:53.189 --rc genhtml_function_coverage=1 00:30:53.189 --rc genhtml_legend=1 00:30:53.190 --rc geninfo_all_blocks=1 00:30:53.190 --rc geninfo_unexecuted_blocks=1 00:30:53.190 00:30:53.190 ' 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:53.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:53.190 --rc genhtml_branch_coverage=1 00:30:53.190 --rc genhtml_function_coverage=1 00:30:53.190 --rc genhtml_legend=1 00:30:53.190 --rc geninfo_all_blocks=1 00:30:53.190 --rc geninfo_unexecuted_blocks=1 00:30:53.190 00:30:53.190 ' 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.190 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:53.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:30:53.191 21:23:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.338 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:01.339 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:01.339 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:01.339 Found net devices under 0000:31:00.0: cvl_0_0 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:01.339 Found net devices under 0000:31:00.1: cvl_0_1 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:31:01.339 00:31:01.339 --- 10.0.0.2 ping statistics --- 00:31:01.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.339 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:01.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:01.339 00:31:01.339 --- 10.0.0.1 ping statistics --- 00:31:01.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.339 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.339 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2294721 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2294721 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@835 -- # '[' -z 2294721 ']' 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:01.601 21:24:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:01.601 [2024-12-05 21:24:02.849242] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:31:01.601 [2024-12-05 21:24:02.849307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.601 [2024-12-05 21:24:02.959240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:01.601 [2024-12-05 21:24:03.012322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.601 [2024-12-05 21:24:03.012372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:01.601 [2024-12-05 21:24:03.012381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.601 [2024-12-05 21:24:03.012389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.601 [2024-12-05 21:24:03.012395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.601 [2024-12-05 21:24:03.014270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.601 [2024-12-05 21:24:03.014442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.601 [2024-12-05 21:24:03.014442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.542 [2024-12-05 21:24:03.714748] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.542 21:24:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.542 Malloc0 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.542 [2024-12-05 21:24:03.785975] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:02.542 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:02.542 { 00:31:02.542 "params": { 00:31:02.542 "name": "Nvme$subsystem", 00:31:02.542 "trtype": "$TEST_TRANSPORT", 00:31:02.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.542 "adrfam": "ipv4", 00:31:02.542 "trsvcid": "$NVMF_PORT", 00:31:02.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.542 "hdgst": ${hdgst:-false}, 00:31:02.543 "ddgst": ${ddgst:-false} 00:31:02.543 }, 00:31:02.543 "method": "bdev_nvme_attach_controller" 00:31:02.543 } 00:31:02.543 EOF 00:31:02.543 )") 00:31:02.543 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:02.543 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:31:02.543 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:02.543 21:24:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:02.543 "params": { 00:31:02.543 "name": "Nvme1", 00:31:02.543 "trtype": "tcp", 00:31:02.543 "traddr": "10.0.0.2", 00:31:02.543 "adrfam": "ipv4", 00:31:02.543 "trsvcid": "4420", 00:31:02.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:02.543 "hdgst": false, 00:31:02.543 "ddgst": false 00:31:02.543 }, 00:31:02.543 "method": "bdev_nvme_attach_controller" 00:31:02.543 }' 00:31:02.543 [2024-12-05 21:24:03.840799] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:31:02.543 [2024-12-05 21:24:03.840848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294904 ] 00:31:02.543 [2024-12-05 21:24:03.916888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.543 [2024-12-05 21:24:03.953090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.803 Running I/O for 1 seconds... 
00:31:03.744 8889.00 IOPS, 34.72 MiB/s 00:31:03.744 Latency(us) 00:31:03.744 [2024-12-05T20:24:05.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.744 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:03.744 Verification LBA range: start 0x0 length 0x4000 00:31:03.744 Nvme1n1 : 1.01 8943.19 34.93 0.00 0.00 14253.06 1378.99 16056.32 00:31:03.744 [2024-12-05T20:24:05.181Z] =================================================================================================================== 00:31:03.744 [2024-12-05T20:24:05.181Z] Total : 8943.19 34.93 0.00 0.00 14253.06 1378.99 16056.32 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2295235 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:04.004 { 00:31:04.004 "params": { 00:31:04.004 "name": "Nvme$subsystem", 00:31:04.004 "trtype": "$TEST_TRANSPORT", 00:31:04.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.004 "adrfam": "ipv4", 00:31:04.004 "trsvcid": "$NVMF_PORT", 00:31:04.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.004 "hdgst": ${hdgst:-false}, 00:31:04.004 "ddgst": 
${ddgst:-false} 00:31:04.004 }, 00:31:04.004 "method": "bdev_nvme_attach_controller" 00:31:04.004 } 00:31:04.004 EOF 00:31:04.004 )") 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:04.004 21:24:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:04.004 "params": { 00:31:04.004 "name": "Nvme1", 00:31:04.004 "trtype": "tcp", 00:31:04.004 "traddr": "10.0.0.2", 00:31:04.004 "adrfam": "ipv4", 00:31:04.004 "trsvcid": "4420", 00:31:04.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:04.004 "hdgst": false, 00:31:04.004 "ddgst": false 00:31:04.004 }, 00:31:04.004 "method": "bdev_nvme_attach_controller" 00:31:04.004 }' 00:31:04.004 [2024-12-05 21:24:05.272942] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:31:04.004 [2024-12-05 21:24:05.272997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295235 ] 00:31:04.004 [2024-12-05 21:24:05.350960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.004 [2024-12-05 21:24:05.386608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.297 Running I/O for 15 seconds... 
00:31:06.623 11037.00 IOPS, 43.11 MiB/s [2024-12-05T20:24:08.324Z] 11115.50 IOPS, 43.42 MiB/s [2024-12-05T20:24:08.324Z] 21:24:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2294721 00:31:06.887 21:24:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:06.887 [2024-12-05 21:24:08.239626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.887 [2024-12-05 21:24:08.239671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.887 [2024-12-05 21:24:08.239692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.887 [2024-12-05 21:24:08.239702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.887 [2024-12-05 21:24:08.239715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.887 [2024-12-05 21:24:08.239724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.887 [2024-12-05 21:24:08.239736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.887 [2024-12-05 21:24:08.239751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.887 [2024-12-05 21:24:08.239761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:06.887 [2024-12-05 21:24:08.239771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
00:31:06.887 [2024-12-05 21:24:08.239782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:06.887 [2024-12-05 21:24:08.239790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.887 [2024-12-05 21:24:08.239839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:06.887 [2024-12-05 21:24:08.239848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... command/completion pairs in this form repeat through 21:24:08.241859: each outstanding READ (lba:102184-102664) and WRITE (lba:102792-103200) on sqid:1 is printed with the same ABORTED - SQ DELETION (00/08) completion status ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.241989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.241998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:06.890 [2024-12-05 21:24:08.242006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.242014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecf660 is same with the state(6) to be set 00:31:06.890 [2024-12-05 21:24:08.242024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:06.890 [2024-12-05 21:24:08.242029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:06.890 [2024-12-05 21:24:08.242036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102744 len:8 PRP1 0x0 PRP2 0x0 00:31:06.890 [2024-12-05 21:24:08.242044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.890 [2024-12-05 21:24:08.245694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting 
controller 00:31:06.890 [2024-12-05 21:24:08.245751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.890 [2024-12-05 21:24:08.246510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.890 [2024-12-05 21:24:08.246527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:06.890 [2024-12-05 21:24:08.246536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:06.890 [2024-12-05 21:24:08.246756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.890 [2024-12-05 21:24:08.246981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.890 [2024-12-05 21:24:08.246990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.890 [2024-12-05 21:24:08.246998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.890 [2024-12-05 21:24:08.247007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.890 [2024-12-05 21:24:08.259904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.890 [2024-12-05 21:24:08.260554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.890 [2024-12-05 21:24:08.260592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:06.890 [2024-12-05 21:24:08.260604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:06.890 [2024-12-05 21:24:08.260843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.890 [2024-12-05 21:24:08.261075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.890 [2024-12-05 21:24:08.261085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.890 [2024-12-05 21:24:08.261093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.890 [2024-12-05 21:24:08.261101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.890 [2024-12-05 21:24:08.273816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.890 [2024-12-05 21:24:08.274408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.890 [2024-12-05 21:24:08.274447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:06.890 [2024-12-05 21:24:08.274458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:06.890 [2024-12-05 21:24:08.274696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.890 [2024-12-05 21:24:08.274928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.890 [2024-12-05 21:24:08.274938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.890 [2024-12-05 21:24:08.274946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.890 [2024-12-05 21:24:08.274954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.890 [2024-12-05 21:24:08.287668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.890 [2024-12-05 21:24:08.288350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.890 [2024-12-05 21:24:08.288389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:06.891 [2024-12-05 21:24:08.288400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:06.891 [2024-12-05 21:24:08.288638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.891 [2024-12-05 21:24:08.288860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.891 [2024-12-05 21:24:08.288879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.891 [2024-12-05 21:24:08.288887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.891 [2024-12-05 21:24:08.288894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.891 [2024-12-05 21:24:08.301595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.891 [2024-12-05 21:24:08.302257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.891 [2024-12-05 21:24:08.302296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:06.891 [2024-12-05 21:24:08.302307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:06.891 [2024-12-05 21:24:08.302545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.891 [2024-12-05 21:24:08.302768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.891 [2024-12-05 21:24:08.302777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.891 [2024-12-05 21:24:08.302786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.891 [2024-12-05 21:24:08.302794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:06.891 [2024-12-05 21:24:08.315513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:06.891 [2024-12-05 21:24:08.316166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.891 [2024-12-05 21:24:08.316205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:06.891 [2024-12-05 21:24:08.316216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:06.891 [2024-12-05 21:24:08.316455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:06.891 [2024-12-05 21:24:08.316677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:06.891 [2024-12-05 21:24:08.316686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:06.891 [2024-12-05 21:24:08.316694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:06.891 [2024-12-05 21:24:08.316702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.153 [2024-12-05 21:24:08.329395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.153 [2024-12-05 21:24:08.330150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.153 [2024-12-05 21:24:08.330188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.153 [2024-12-05 21:24:08.330199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.153 [2024-12-05 21:24:08.330441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.153 [2024-12-05 21:24:08.330663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.153 [2024-12-05 21:24:08.330672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.153 [2024-12-05 21:24:08.330680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.153 [2024-12-05 21:24:08.330688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.153 [2024-12-05 21:24:08.343182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.153 [2024-12-05 21:24:08.343732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.153 [2024-12-05 21:24:08.343752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.153 [2024-12-05 21:24:08.343760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.343985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.344203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.344212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.344219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.344226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.357120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.357780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.357819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.357831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.358081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.358304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.358313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.358322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.358329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.371043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.371599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.371619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.371627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.371845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.372070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.372084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.372092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.372098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.385010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.385645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.385683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.385694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.385941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.386164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.386174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.386182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.386191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.398912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.399459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.399478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.399486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.399705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.399932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.399940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.399947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.399954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.412851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.413296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.413313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.413321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.413538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.413756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.413764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.413771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.413782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.426694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.427282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.427300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.427307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.427525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.427742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.427750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.427757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.427764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.440666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.441264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.441281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.441288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.441506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.441723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.441732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.441739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.441745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.454438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.154 [2024-12-05 21:24:08.454943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.154 [2024-12-05 21:24:08.454960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.154 [2024-12-05 21:24:08.454968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.154 [2024-12-05 21:24:08.455185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.154 [2024-12-05 21:24:08.455402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.154 [2024-12-05 21:24:08.455411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.154 [2024-12-05 21:24:08.455418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.154 [2024-12-05 21:24:08.455425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.154 [2024-12-05 21:24:08.468320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.154 [2024-12-05 21:24:08.468760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.154 [2024-12-05 21:24:08.468775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.154 [2024-12-05 21:24:08.468782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.154 [2024-12-05 21:24:08.469005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.154 [2024-12-05 21:24:08.469223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.154 [2024-12-05 21:24:08.469231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.154 [2024-12-05 21:24:08.469239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.154 [2024-12-05 21:24:08.469245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.154 [2024-12-05 21:24:08.482161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.154 [2024-12-05 21:24:08.482639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.482655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.482662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.482886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.483104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.483113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.483120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.483126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.496015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.496588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.496627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.496638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.496885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.497109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.497118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.497126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.497134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.509815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.510496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.510534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.510546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.510788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.511021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.511031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.511039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.511046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.523735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.524446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.524484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.524495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.524733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.524964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.524975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.524984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.524992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.537673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.538264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.538303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.538314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.538552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.538774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.538783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.538791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.538799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.551487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.551981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.552001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.552009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.552228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.552605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.552622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.552629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.552636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.565364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.566002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.566041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.566053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.566292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.566514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.566524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.566531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.566540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.155 [2024-12-05 21:24:08.579247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.155 [2024-12-05 21:24:08.579847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.155 [2024-12-05 21:24:08.579872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.155 [2024-12-05 21:24:08.579881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.155 [2024-12-05 21:24:08.580099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.155 [2024-12-05 21:24:08.580317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.155 [2024-12-05 21:24:08.580325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.155 [2024-12-05 21:24:08.580332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.155 [2024-12-05 21:24:08.580339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.418 [2024-12-05 21:24:08.593015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.418 [2024-12-05 21:24:08.593624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.418 [2024-12-05 21:24:08.593663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.418 [2024-12-05 21:24:08.593674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.418 [2024-12-05 21:24:08.593920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.418 [2024-12-05 21:24:08.594143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.418 [2024-12-05 21:24:08.594152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.418 [2024-12-05 21:24:08.594160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.418 [2024-12-05 21:24:08.594173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.418 [2024-12-05 21:24:08.606853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.418 [2024-12-05 21:24:08.607434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.418 [2024-12-05 21:24:08.607472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.418 [2024-12-05 21:24:08.607483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.418 [2024-12-05 21:24:08.607721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.418 [2024-12-05 21:24:08.607949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.418 [2024-12-05 21:24:08.607959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.418 [2024-12-05 21:24:08.607967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.418 [2024-12-05 21:24:08.607975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.418 [2024-12-05 21:24:08.620657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.418 [2024-12-05 21:24:08.621391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.418 [2024-12-05 21:24:08.621429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.418 [2024-12-05 21:24:08.621440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.418 [2024-12-05 21:24:08.621677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.418 [2024-12-05 21:24:08.621907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.418 [2024-12-05 21:24:08.621917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.418 [2024-12-05 21:24:08.621924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.418 [2024-12-05 21:24:08.621932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.418 [2024-12-05 21:24:08.634608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.418 [2024-12-05 21:24:08.635179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.418 [2024-12-05 21:24:08.635199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.418 [2024-12-05 21:24:08.635207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.418 [2024-12-05 21:24:08.635427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.418 [2024-12-05 21:24:08.635645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.418 [2024-12-05 21:24:08.635653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.418 [2024-12-05 21:24:08.635660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.418 [2024-12-05 21:24:08.635666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.648543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.649100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.649143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.649156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.649394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.649616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.649625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.649633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.649641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 9719.00 IOPS, 37.96 MiB/s [2024-12-05T20:24:08.856Z] [2024-12-05 21:24:08.663988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.664610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.664648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.664659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.664904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.665127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.665136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.665143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.665151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.677909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.678465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.678484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.678492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.678710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.678933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.678943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.678950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.678957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.691839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.692485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.692524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.692535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.692777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.693007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.693017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.693025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.693033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.705712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.706390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.706428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.706439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.706676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.706907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.706917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.706925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.706932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.719604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.720159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.720178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.720186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.720405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.720622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.720631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.720638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.720645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.733522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.734175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.734214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.734225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.734463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.734685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.734699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.734706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.734715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.747409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.747978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.748016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.748028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.748267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.748489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.748499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.748508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.748516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.761203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.761879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.761917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.761928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.762165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.762387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.762395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.419 [2024-12-05 21:24:08.762403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.419 [2024-12-05 21:24:08.762411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.419 [2024-12-05 21:24:08.775103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.419 [2024-12-05 21:24:08.775635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.419 [2024-12-05 21:24:08.775655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.419 [2024-12-05 21:24:08.775663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.419 [2024-12-05 21:24:08.775889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.419 [2024-12-05 21:24:08.776108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.419 [2024-12-05 21:24:08.776116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.420 [2024-12-05 21:24:08.776123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.420 [2024-12-05 21:24:08.776135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.420 [2024-12-05 21:24:08.789025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.420 [2024-12-05 21:24:08.789690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-12-05 21:24:08.789728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.420 [2024-12-05 21:24:08.789739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.420 [2024-12-05 21:24:08.789985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.420 [2024-12-05 21:24:08.790207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.420 [2024-12-05 21:24:08.790216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.420 [2024-12-05 21:24:08.790224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.420 [2024-12-05 21:24:08.790232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.420 [2024-12-05 21:24:08.802912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.420 [2024-12-05 21:24:08.803445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-12-05 21:24:08.803483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.420 [2024-12-05 21:24:08.803494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.420 [2024-12-05 21:24:08.803731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.420 [2024-12-05 21:24:08.803962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.420 [2024-12-05 21:24:08.803972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.420 [2024-12-05 21:24:08.803980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.420 [2024-12-05 21:24:08.803988] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.420 [2024-12-05 21:24:08.816877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.420 [2024-12-05 21:24:08.817456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-12-05 21:24:08.817476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.420 [2024-12-05 21:24:08.817484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.420 [2024-12-05 21:24:08.817703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.420 [2024-12-05 21:24:08.817926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.420 [2024-12-05 21:24:08.817944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.420 [2024-12-05 21:24:08.817951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.420 [2024-12-05 21:24:08.817958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.420 [2024-12-05 21:24:08.830837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.420 [2024-12-05 21:24:08.831420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-12-05 21:24:08.831443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.420 [2024-12-05 21:24:08.831450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.420 [2024-12-05 21:24:08.831668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.420 [2024-12-05 21:24:08.831890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.420 [2024-12-05 21:24:08.831899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.420 [2024-12-05 21:24:08.831906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.420 [2024-12-05 21:24:08.831912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.420 [2024-12-05 21:24:08.844791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.420 [2024-12-05 21:24:08.845361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.420 [2024-12-05 21:24:08.845399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:07.420 [2024-12-05 21:24:08.845410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:07.420 [2024-12-05 21:24:08.845648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:07.420 [2024-12-05 21:24:08.845878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:07.420 [2024-12-05 21:24:08.845887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:07.420 [2024-12-05 21:24:08.845895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:07.420 [2024-12-05 21:24:08.845903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:07.683 [2024-12-05 21:24:08.858574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.683 [2024-12-05 21:24:08.859093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.683 [2024-12-05 21:24:08.859131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.683 [2024-12-05 21:24:08.859142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.683 [2024-12-05 21:24:08.859380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.683 [2024-12-05 21:24:08.859601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.683 [2024-12-05 21:24:08.859610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.683 [2024-12-05 21:24:08.859618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.683 [2024-12-05 21:24:08.859626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.683 [2024-12-05 21:24:08.872527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.683 [2024-12-05 21:24:08.873181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.683 [2024-12-05 21:24:08.873219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.683 [2024-12-05 21:24:08.873231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.683 [2024-12-05 21:24:08.873473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.683 [2024-12-05 21:24:08.873695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.683 [2024-12-05 21:24:08.873705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.683 [2024-12-05 21:24:08.873712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.683 [2024-12-05 21:24:08.873721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.683 [2024-12-05 21:24:08.886414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.683 [2024-12-05 21:24:08.887083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.683 [2024-12-05 21:24:08.887121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.683 [2024-12-05 21:24:08.887133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.683 [2024-12-05 21:24:08.887371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.887592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.887602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.887610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.887618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.900301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.900985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.901023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.901034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.901272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.901494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.901503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.901511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.901519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.914204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.914839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.914886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.914899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.915137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.915359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.915376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.915384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.915392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.928073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.928753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.928791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.928802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.929047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.929270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.929279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.929286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.929294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.941980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.942633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.942671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.942683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.942928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.943151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.943160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.943168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.943176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.955858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.956295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.956314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.956322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.956541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.956759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.956768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.956775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.956782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.969664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.970221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.970238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.970246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.970464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.970682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.970690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.970697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.970704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.983601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.984220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.984258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.984270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.984507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.984729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.984738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.984746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.984754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:08.997437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:08.998124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:08.998162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:08.998173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:08.998410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:08.998632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:08.998642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:08.998650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:08.998658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:09.011349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:09.011964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:09.012007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:09.012020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.684 [2024-12-05 21:24:09.012260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.684 [2024-12-05 21:24:09.012481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.684 [2024-12-05 21:24:09.012490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.684 [2024-12-05 21:24:09.012497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.684 [2024-12-05 21:24:09.012505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.684 [2024-12-05 21:24:09.025192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.684 [2024-12-05 21:24:09.025825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.684 [2024-12-05 21:24:09.025871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.684 [2024-12-05 21:24:09.025884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.026121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.026343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.026352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.026359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.026367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.685 [2024-12-05 21:24:09.039047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.685 [2024-12-05 21:24:09.039729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.685 [2024-12-05 21:24:09.039768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.685 [2024-12-05 21:24:09.039779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.040025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.040248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.040257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.040264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.040272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.685 [2024-12-05 21:24:09.052956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.685 [2024-12-05 21:24:09.053542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.685 [2024-12-05 21:24:09.053580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.685 [2024-12-05 21:24:09.053591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.053833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.054064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.054074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.054082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.054090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.685 [2024-12-05 21:24:09.066768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.685 [2024-12-05 21:24:09.067463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.685 [2024-12-05 21:24:09.067502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.685 [2024-12-05 21:24:09.067513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.067751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.067982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.067992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.067999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.068007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.685 [2024-12-05 21:24:09.080706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.685 [2024-12-05 21:24:09.081241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.685 [2024-12-05 21:24:09.081279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.685 [2024-12-05 21:24:09.081290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.081528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.081749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.081758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.081766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.081774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.685 [2024-12-05 21:24:09.094675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.685 [2024-12-05 21:24:09.095324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.685 [2024-12-05 21:24:09.095363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.685 [2024-12-05 21:24:09.095374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.095611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.095834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.095843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.095854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.095871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.685 [2024-12-05 21:24:09.108560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.685 [2024-12-05 21:24:09.109208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.685 [2024-12-05 21:24:09.109246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.685 [2024-12-05 21:24:09.109257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.685 [2024-12-05 21:24:09.109494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.685 [2024-12-05 21:24:09.109716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.685 [2024-12-05 21:24:09.109725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.685 [2024-12-05 21:24:09.109733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.685 [2024-12-05 21:24:09.109741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.949 [2024-12-05 21:24:09.122427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.949 [2024-12-05 21:24:09.123135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.949 [2024-12-05 21:24:09.123173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.949 [2024-12-05 21:24:09.123184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.949 [2024-12-05 21:24:09.123422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.949 [2024-12-05 21:24:09.123643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.949 [2024-12-05 21:24:09.123652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.949 [2024-12-05 21:24:09.123660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.949 [2024-12-05 21:24:09.123669] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.949 [2024-12-05 21:24:09.136359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.949 [2024-12-05 21:24:09.136976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.949 [2024-12-05 21:24:09.137015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.949 [2024-12-05 21:24:09.137028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.949 [2024-12-05 21:24:09.137267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.949 [2024-12-05 21:24:09.137488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.949 [2024-12-05 21:24:09.137497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.949 [2024-12-05 21:24:09.137505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.949 [2024-12-05 21:24:09.137513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.949 [2024-12-05 21:24:09.150213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.949 [2024-12-05 21:24:09.150944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.949 [2024-12-05 21:24:09.150983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.949 [2024-12-05 21:24:09.150995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.949 [2024-12-05 21:24:09.151234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.949 [2024-12-05 21:24:09.151456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.949 [2024-12-05 21:24:09.151466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.949 [2024-12-05 21:24:09.151474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.949 [2024-12-05 21:24:09.151482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.949 [2024-12-05 21:24:09.164179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.949 [2024-12-05 21:24:09.164756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.949 [2024-12-05 21:24:09.164775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.949 [2024-12-05 21:24:09.164783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.949 [2024-12-05 21:24:09.165007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.949 [2024-12-05 21:24:09.165226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.949 [2024-12-05 21:24:09.165234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.949 [2024-12-05 21:24:09.165242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.949 [2024-12-05 21:24:09.165249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.949 [2024-12-05 21:24:09.178145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.949 [2024-12-05 21:24:09.178580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.949 [2024-12-05 21:24:09.178597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.949 [2024-12-05 21:24:09.178605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.949 [2024-12-05 21:24:09.178822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.949 [2024-12-05 21:24:09.179110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.949 [2024-12-05 21:24:09.179119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.949 [2024-12-05 21:24:09.179127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.949 [2024-12-05 21:24:09.179133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.192011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.192650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.192688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.192703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.192948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.193171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.193180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.193187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.193196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.205883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.206337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.206356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.206364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.206582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.206800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.206808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.206816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.206823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.219705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.220258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.220275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.220283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.220500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.220718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.220725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.220733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.220739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.233622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.234158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.234175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.234183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.234401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.234623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.234631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.234638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.234644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.247522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.248070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.248087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.248095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.248312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.248531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.248540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.248547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.248554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.261433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.261978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.261994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.262003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.262221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.262438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.262447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.262456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.262465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.275254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.275842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.275886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.275899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.276139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.276359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.276369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.276381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.276388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.289090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.289563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.289582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.289591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.289809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.290033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.290041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.290048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.290055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.302943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.303607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.303645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.303656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.303902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.304125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.950 [2024-12-05 21:24:09.304134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.950 [2024-12-05 21:24:09.304142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.950 [2024-12-05 21:24:09.304150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.950 [2024-12-05 21:24:09.316831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.950 [2024-12-05 21:24:09.317409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.950 [2024-12-05 21:24:09.317447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.950 [2024-12-05 21:24:09.317459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.950 [2024-12-05 21:24:09.317698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.950 [2024-12-05 21:24:09.317927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.951 [2024-12-05 21:24:09.317937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.951 [2024-12-05 21:24:09.317945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.951 [2024-12-05 21:24:09.317953] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.951 [2024-12-05 21:24:09.330640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.951 [2024-12-05 21:24:09.331314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.951 [2024-12-05 21:24:09.331352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.951 [2024-12-05 21:24:09.331364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.951 [2024-12-05 21:24:09.331601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.951 [2024-12-05 21:24:09.331823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.951 [2024-12-05 21:24:09.331832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.951 [2024-12-05 21:24:09.331839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.951 [2024-12-05 21:24:09.331848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.951 [2024-12-05 21:24:09.344535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.951 [2024-12-05 21:24:09.345185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.951 [2024-12-05 21:24:09.345224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.951 [2024-12-05 21:24:09.345235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.951 [2024-12-05 21:24:09.345472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.951 [2024-12-05 21:24:09.345694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.951 [2024-12-05 21:24:09.345703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.951 [2024-12-05 21:24:09.345710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.951 [2024-12-05 21:24:09.345718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.951 [2024-12-05 21:24:09.358401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.951 [2024-12-05 21:24:09.358995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.951 [2024-12-05 21:24:09.359034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.951 [2024-12-05 21:24:09.359046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.951 [2024-12-05 21:24:09.359287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.951 [2024-12-05 21:24:09.359509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.951 [2024-12-05 21:24:09.359519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.951 [2024-12-05 21:24:09.359526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.951 [2024-12-05 21:24:09.359535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:07.951 [2024-12-05 21:24:09.372228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:07.951 [2024-12-05 21:24:09.372936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.951 [2024-12-05 21:24:09.372975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:07.951 [2024-12-05 21:24:09.372992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:07.951 [2024-12-05 21:24:09.373244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:07.951 [2024-12-05 21:24:09.373467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:07.951 [2024-12-05 21:24:09.373476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:07.951 [2024-12-05 21:24:09.373484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:07.951 [2024-12-05 21:24:09.373491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.214 [2024-12-05 21:24:09.386187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.214 [2024-12-05 21:24:09.386753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.214 [2024-12-05 21:24:09.386772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.214 [2024-12-05 21:24:09.386780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.214 [2024-12-05 21:24:09.387005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.214 [2024-12-05 21:24:09.387223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.214 [2024-12-05 21:24:09.387232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.214 [2024-12-05 21:24:09.387240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.214 [2024-12-05 21:24:09.387246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.214 [2024-12-05 21:24:09.400128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.214 [2024-12-05 21:24:09.400653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.214 [2024-12-05 21:24:09.400669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.214 [2024-12-05 21:24:09.400677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.214 [2024-12-05 21:24:09.400901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.214 [2024-12-05 21:24:09.401119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.214 [2024-12-05 21:24:09.401127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.214 [2024-12-05 21:24:09.401134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.214 [2024-12-05 21:24:09.401142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.214 [2024-12-05 21:24:09.414012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.214 [2024-12-05 21:24:09.414582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.214 [2024-12-05 21:24:09.414598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.214 [2024-12-05 21:24:09.414606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.214 [2024-12-05 21:24:09.414823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.214 [2024-12-05 21:24:09.415051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.214 [2024-12-05 21:24:09.415061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.214 [2024-12-05 21:24:09.415067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.214 [2024-12-05 21:24:09.415074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.214 [2024-12-05 21:24:09.427947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.214 [2024-12-05 21:24:09.428481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.214 [2024-12-05 21:24:09.428520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.214 [2024-12-05 21:24:09.428531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.214 [2024-12-05 21:24:09.428768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.214 [2024-12-05 21:24:09.428998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.214 [2024-12-05 21:24:09.429007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.214 [2024-12-05 21:24:09.429015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.214 [2024-12-05 21:24:09.429023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.214 [2024-12-05 21:24:09.441912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.214 [2024-12-05 21:24:09.442471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.214 [2024-12-05 21:24:09.442490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.214 [2024-12-05 21:24:09.442498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.214 [2024-12-05 21:24:09.442716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.214 [2024-12-05 21:24:09.442939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.214 [2024-12-05 21:24:09.442948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.215 [2024-12-05 21:24:09.442956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.215 [2024-12-05 21:24:09.442962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.215 [2024-12-05 21:24:09.455839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.215 [2024-12-05 21:24:09.456412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.215 [2024-12-05 21:24:09.456429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.215 [2024-12-05 21:24:09.456437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.215 [2024-12-05 21:24:09.456654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.215 [2024-12-05 21:24:09.456877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.215 [2024-12-05 21:24:09.456886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.215 [2024-12-05 21:24:09.456898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.215 [2024-12-05 21:24:09.456905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.215 [2024-12-05 21:24:09.469785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.470324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.215 [2024-12-05 21:24:09.470340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.215 [2024-12-05 21:24:09.470348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.215 [2024-12-05 21:24:09.470566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.215 [2024-12-05 21:24:09.470783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.215 [2024-12-05 21:24:09.470791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.215 [2024-12-05 21:24:09.470798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.215 [2024-12-05 21:24:09.470805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.215 [2024-12-05 21:24:09.483712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.484272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.215 [2024-12-05 21:24:09.484289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.215 [2024-12-05 21:24:09.484297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.215 [2024-12-05 21:24:09.484514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.215 [2024-12-05 21:24:09.484733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.215 [2024-12-05 21:24:09.484742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.215 [2024-12-05 21:24:09.484748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.215 [2024-12-05 21:24:09.484755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.215 [2024-12-05 21:24:09.497635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.498148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.215 [2024-12-05 21:24:09.498165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.215 [2024-12-05 21:24:09.498172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.215 [2024-12-05 21:24:09.498390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.215 [2024-12-05 21:24:09.498608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.215 [2024-12-05 21:24:09.498618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.215 [2024-12-05 21:24:09.498625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.215 [2024-12-05 21:24:09.498632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.215 [2024-12-05 21:24:09.511515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.511937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.215 [2024-12-05 21:24:09.511962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.215 [2024-12-05 21:24:09.511971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.215 [2024-12-05 21:24:09.512194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.215 [2024-12-05 21:24:09.512412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.215 [2024-12-05 21:24:09.512421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.215 [2024-12-05 21:24:09.512428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.215 [2024-12-05 21:24:09.512435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.215 [2024-12-05 21:24:09.525317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.525850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.215 [2024-12-05 21:24:09.525872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.215 [2024-12-05 21:24:09.525881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.215 [2024-12-05 21:24:09.526098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.215 [2024-12-05 21:24:09.526316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.215 [2024-12-05 21:24:09.526324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.215 [2024-12-05 21:24:09.526331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.215 [2024-12-05 21:24:09.526338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.215 [2024-12-05 21:24:09.539218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.539879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.215 [2024-12-05 21:24:09.539917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.215 [2024-12-05 21:24:09.539930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.215 [2024-12-05 21:24:09.540171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.215 [2024-12-05 21:24:09.540392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.215 [2024-12-05 21:24:09.540401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.215 [2024-12-05 21:24:09.540409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.215 [2024-12-05 21:24:09.540416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.215 [2024-12-05 21:24:09.553327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.215 [2024-12-05 21:24:09.553996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.554034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.554051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.554292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.554515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.554524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.554531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.554539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.216 [2024-12-05 21:24:09.567235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.216 [2024-12-05 21:24:09.567792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.567812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.567820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.568043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.568262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.568270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.568277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.568284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.216 [2024-12-05 21:24:09.581192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.216 [2024-12-05 21:24:09.581659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.581697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.581710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.581956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.582179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.582188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.582196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.582204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.216 [2024-12-05 21:24:09.595097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.216 [2024-12-05 21:24:09.595684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.595703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.595711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.595936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.596163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.596171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.596178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.596185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.216 [2024-12-05 21:24:09.609070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.216 [2024-12-05 21:24:09.609616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.609634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.609641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.609858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.610082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.610090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.610097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.610104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.216 [2024-12-05 21:24:09.622998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.216 [2024-12-05 21:24:09.623610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.623649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.623660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.623905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.624128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.624137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.624145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.624153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.216 [2024-12-05 21:24:09.636845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.216 [2024-12-05 21:24:09.637492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.216 [2024-12-05 21:24:09.637530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.216 [2024-12-05 21:24:09.637541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.216 [2024-12-05 21:24:09.637778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.216 [2024-12-05 21:24:09.638011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.216 [2024-12-05 21:24:09.638021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.216 [2024-12-05 21:24:09.638029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.216 [2024-12-05 21:24:09.638041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.479 [2024-12-05 21:24:09.650725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.479 [2024-12-05 21:24:09.651374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.479 [2024-12-05 21:24:09.651412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.479 [2024-12-05 21:24:09.651423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.479 [2024-12-05 21:24:09.651661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.479 [2024-12-05 21:24:09.651890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.479 [2024-12-05 21:24:09.651900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.479 [2024-12-05 21:24:09.651908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.479 [2024-12-05 21:24:09.651916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.479 7289.25 IOPS, 28.47 MiB/s [2024-12-05T20:24:09.916Z]
00:31:08.479 [2024-12-05 21:24:09.666260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.479 [2024-12-05 21:24:09.666845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.479 [2024-12-05 21:24:09.666870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.479 [2024-12-05 21:24:09.666878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.479 [2024-12-05 21:24:09.667097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.479 [2024-12-05 21:24:09.667316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.479 [2024-12-05 21:24:09.667324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.479 [2024-12-05 21:24:09.667331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.479 [2024-12-05 21:24:09.667338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.479 [2024-12-05 21:24:09.680246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.479 [2024-12-05 21:24:09.680822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.479 [2024-12-05 21:24:09.680840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.479 [2024-12-05 21:24:09.680848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.479 [2024-12-05 21:24:09.681070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.479 [2024-12-05 21:24:09.681289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.479 [2024-12-05 21:24:09.681296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.681303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.681310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.694330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.694952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.694990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.695003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.695244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.695466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.695474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.695482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.695490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.708187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.708820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.708858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.708878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.709117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.709339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.709348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.709356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.709364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.722040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.722717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.722754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.722765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.723010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.723233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.723242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.723249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.723258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.735942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.736485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.736504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.736517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.736735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.736960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.736969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.736976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.736983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.749866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.750490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.750528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.750539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.750776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.751005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.751016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.751025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.751034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.763718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.764247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.764267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.764276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.764495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.764714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.764722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.764730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.764738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.777643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.778318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.480 [2024-12-05 21:24:09.778356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.480 [2024-12-05 21:24:09.778368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.480 [2024-12-05 21:24:09.778608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.480 [2024-12-05 21:24:09.778834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.480 [2024-12-05 21:24:09.778843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.480 [2024-12-05 21:24:09.778851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.480 [2024-12-05 21:24:09.778859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.480 [2024-12-05 21:24:09.791558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.480 [2024-12-05 21:24:09.792050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.481 [2024-12-05 21:24:09.792071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.481 [2024-12-05 21:24:09.792079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.481 [2024-12-05 21:24:09.792298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.481 [2024-12-05 21:24:09.792516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.481 [2024-12-05 21:24:09.792524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.481 [2024-12-05 21:24:09.792532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.481 [2024-12-05 21:24:09.792539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.481 [2024-12-05 21:24:09.805428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.481 [2024-12-05 21:24:09.805968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.481 [2024-12-05 21:24:09.805985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.481 [2024-12-05 21:24:09.805993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.481 [2024-12-05 21:24:09.806211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.481 [2024-12-05 21:24:09.806428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.481 [2024-12-05 21:24:09.806436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.481 [2024-12-05 21:24:09.806443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.481 [2024-12-05 21:24:09.806450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.481 [2024-12-05 21:24:09.819333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.481 [2024-12-05 21:24:09.819905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.481 [2024-12-05 21:24:09.819945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.481 [2024-12-05 21:24:09.819957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.481 [2024-12-05 21:24:09.820199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.481 [2024-12-05 21:24:09.820421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.481 [2024-12-05 21:24:09.820430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.481 [2024-12-05 21:24:09.820438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.481 [2024-12-05 21:24:09.820450] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.481 [2024-12-05 21:24:09.833150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.481 [2024-12-05 21:24:09.833786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.481 [2024-12-05 21:24:09.833824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.481 [2024-12-05 21:24:09.833837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.481 [2024-12-05 21:24:09.834086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.481 [2024-12-05 21:24:09.834310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.481 [2024-12-05 21:24:09.834319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.481 [2024-12-05 21:24:09.834326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.481 [2024-12-05 21:24:09.834334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.481 [2024-12-05 21:24:09.847021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.481 [2024-12-05 21:24:09.847701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.481 [2024-12-05 21:24:09.847739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.481 [2024-12-05 21:24:09.847750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.481 [2024-12-05 21:24:09.847995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.481 [2024-12-05 21:24:09.848217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.481 [2024-12-05 21:24:09.848226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.481 [2024-12-05 21:24:09.848234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.481 [2024-12-05 21:24:09.848242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.481 [2024-12-05 21:24:09.860923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.481 [2024-12-05 21:24:09.861470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.481 [2024-12-05 21:24:09.861490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.481 [2024-12-05 21:24:09.861498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.481 [2024-12-05 21:24:09.861717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.481 [2024-12-05 21:24:09.861941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.481 [2024-12-05 21:24:09.861951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.481 [2024-12-05 21:24:09.861958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.481 [2024-12-05 21:24:09.861965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.481 [2024-12-05 21:24:09.874870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.481 [2024-12-05 21:24:09.875537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.481 [2024-12-05 21:24:09.875576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.481 [2024-12-05 21:24:09.875587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.481 [2024-12-05 21:24:09.875825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.481 [2024-12-05 21:24:09.876056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.481 [2024-12-05 21:24:09.876066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.481 [2024-12-05 21:24:09.876073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.481 [2024-12-05 21:24:09.876081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.481 [2024-12-05 21:24:09.888772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.481 [2024-12-05 21:24:09.889423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.481 [2024-12-05 21:24:09.889461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.481 [2024-12-05 21:24:09.889472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.481 [2024-12-05 21:24:09.889710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.481 [2024-12-05 21:24:09.889940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.481 [2024-12-05 21:24:09.889950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.481 [2024-12-05 21:24:09.889957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.482 [2024-12-05 21:24:09.889966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.482 [2024-12-05 21:24:09.902640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.482 [2024-12-05 21:24:09.903239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.482 [2024-12-05 21:24:09.903260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.482 [2024-12-05 21:24:09.903268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.482 [2024-12-05 21:24:09.903486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.482 [2024-12-05 21:24:09.903704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.482 [2024-12-05 21:24:09.903712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.482 [2024-12-05 21:24:09.903720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.482 [2024-12-05 21:24:09.903726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.746 [2024-12-05 21:24:09.916612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.746 [2024-12-05 21:24:09.917151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.746 [2024-12-05 21:24:09.917188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.746 [2024-12-05 21:24:09.917201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.746 [2024-12-05 21:24:09.917444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.746 [2024-12-05 21:24:09.917666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.746 [2024-12-05 21:24:09.917676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.746 [2024-12-05 21:24:09.917684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.746 [2024-12-05 21:24:09.917692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.746 [2024-12-05 21:24:09.930388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.746 [2024-12-05 21:24:09.931007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.746 [2024-12-05 21:24:09.931046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.746 [2024-12-05 21:24:09.931058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.746 [2024-12-05 21:24:09.931298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.746 [2024-12-05 21:24:09.931520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.746 [2024-12-05 21:24:09.931529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.746 [2024-12-05 21:24:09.931536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.746 [2024-12-05 21:24:09.931544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.746 [2024-12-05 21:24:09.944236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.746 [2024-12-05 21:24:09.944821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.746 [2024-12-05 21:24:09.944840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.746 [2024-12-05 21:24:09.944848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.746 [2024-12-05 21:24:09.945072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.746 [2024-12-05 21:24:09.945290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.746 [2024-12-05 21:24:09.945300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.746 [2024-12-05 21:24:09.945307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.746 [2024-12-05 21:24:09.945313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.746 [2024-12-05 21:24:09.958201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.746 [2024-12-05 21:24:09.958732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.746 [2024-12-05 21:24:09.958748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.746 [2024-12-05 21:24:09.958756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.746 [2024-12-05 21:24:09.958979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.746 [2024-12-05 21:24:09.959197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.746 [2024-12-05 21:24:09.959211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.746 [2024-12-05 21:24:09.959218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.746 [2024-12-05 21:24:09.959225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.746 [2024-12-05 21:24:09.972112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.746 [2024-12-05 21:24:09.972723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.746 [2024-12-05 21:24:09.972761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.746 [2024-12-05 21:24:09.972772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.746 [2024-12-05 21:24:09.973018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.746 [2024-12-05 21:24:09.973241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.746 [2024-12-05 21:24:09.973250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:09.973258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:09.973266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:09.985970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:09.986647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:09.986685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:09.986696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:09.986944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:09.987167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:09.987176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:09.987184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:09.987192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:09.999887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.000454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.000473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.000481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.000700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.000924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.000933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.000941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.000952] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.013721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.014355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.014394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.014407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.014647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.014878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.014889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.014897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.014906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.027610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.028188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.028208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.028216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.028435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.028655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.028663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.028671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.028678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.041589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.042117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.042135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.042143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.042361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.042579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.042587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.042594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.042601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.055522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.056195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.056233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.056244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.056482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.056704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.056714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.056721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.056729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.069463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.070160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.070198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.070210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.070448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.070670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.070679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.070687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.070695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.083415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.083874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.083894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.083903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.084122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.084340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.084350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.084357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.084364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.097266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.097802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.097819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.097827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.098057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.098276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.098284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.747 [2024-12-05 21:24:10.098291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.747 [2024-12-05 21:24:10.098298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.747 [2024-12-05 21:24:10.111205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.747 [2024-12-05 21:24:10.111791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.747 [2024-12-05 21:24:10.111808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.747 [2024-12-05 21:24:10.111815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.747 [2024-12-05 21:24:10.112039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.747 [2024-12-05 21:24:10.112257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.747 [2024-12-05 21:24:10.112265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.748 [2024-12-05 21:24:10.112272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.748 [2024-12-05 21:24:10.112279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.748 [2024-12-05 21:24:10.124985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:08.748 [2024-12-05 21:24:10.125528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.748 [2024-12-05 21:24:10.125544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:08.748 [2024-12-05 21:24:10.125552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:08.748 [2024-12-05 21:24:10.125770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:08.748 [2024-12-05 21:24:10.125994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:08.748 [2024-12-05 21:24:10.126003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:08.748 [2024-12-05 21:24:10.126010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:08.748 [2024-12-05 21:24:10.126017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:08.748 [2024-12-05 21:24:10.138917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.748 [2024-12-05 21:24:10.139515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.748 [2024-12-05 21:24:10.139553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.748 [2024-12-05 21:24:10.139564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.748 [2024-12-05 21:24:10.139802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.748 [2024-12-05 21:24:10.140034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.748 [2024-12-05 21:24:10.140048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.748 [2024-12-05 21:24:10.140056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.748 [2024-12-05 21:24:10.140064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.748 [2024-12-05 21:24:10.152758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.748 [2024-12-05 21:24:10.153314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.748 [2024-12-05 21:24:10.153334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.748 [2024-12-05 21:24:10.153342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.748 [2024-12-05 21:24:10.153561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.748 [2024-12-05 21:24:10.153779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.748 [2024-12-05 21:24:10.153787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.748 [2024-12-05 21:24:10.153794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.748 [2024-12-05 21:24:10.153801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:08.748 [2024-12-05 21:24:10.166704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:08.748 [2024-12-05 21:24:10.167215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.748 [2024-12-05 21:24:10.167233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:08.748 [2024-12-05 21:24:10.167241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:08.748 [2024-12-05 21:24:10.167459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:08.748 [2024-12-05 21:24:10.167677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:08.748 [2024-12-05 21:24:10.167684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:08.748 [2024-12-05 21:24:10.167691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:08.748 [2024-12-05 21:24:10.167698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.033 [2024-12-05 21:24:10.180617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.033 [2024-12-05 21:24:10.181215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.033 [2024-12-05 21:24:10.181232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.033 [2024-12-05 21:24:10.181240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.033 [2024-12-05 21:24:10.181457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.033 [2024-12-05 21:24:10.181674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.033 [2024-12-05 21:24:10.181683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.033 [2024-12-05 21:24:10.181691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.033 [2024-12-05 21:24:10.181702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.033 [2024-12-05 21:24:10.194398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.033 [2024-12-05 21:24:10.195000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.033 [2024-12-05 21:24:10.195017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.033 [2024-12-05 21:24:10.195025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.033 [2024-12-05 21:24:10.195242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.033 [2024-12-05 21:24:10.195459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.033 [2024-12-05 21:24:10.195475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.033 [2024-12-05 21:24:10.195482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.033 [2024-12-05 21:24:10.195489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.033 [2024-12-05 21:24:10.208179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.033 [2024-12-05 21:24:10.208704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.033 [2024-12-05 21:24:10.208720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.033 [2024-12-05 21:24:10.208727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.033 [2024-12-05 21:24:10.208950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.033 [2024-12-05 21:24:10.209168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.033 [2024-12-05 21:24:10.209176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.033 [2024-12-05 21:24:10.209183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.033 [2024-12-05 21:24:10.209190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.033 [2024-12-05 21:24:10.222084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.033 [2024-12-05 21:24:10.222570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.033 [2024-12-05 21:24:10.222585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.033 [2024-12-05 21:24:10.222592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.033 [2024-12-05 21:24:10.222810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.033 [2024-12-05 21:24:10.223034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.033 [2024-12-05 21:24:10.223042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.033 [2024-12-05 21:24:10.223049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.033 [2024-12-05 21:24:10.223055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.033 [2024-12-05 21:24:10.235952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.033 [2024-12-05 21:24:10.236474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.033 [2024-12-05 21:24:10.236494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.033 [2024-12-05 21:24:10.236502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.033 [2024-12-05 21:24:10.236719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.033 [2024-12-05 21:24:10.236942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.033 [2024-12-05 21:24:10.236952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.236960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.236966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.249858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.250340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.250356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.250364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.250581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.250799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.250807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.250814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.250821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.263734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.264228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.264245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.264253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.264472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.264689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.264698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.264705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.264711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.277621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.278186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.278202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.278210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.278435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.278653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.278661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.278669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.278675] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.291566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.292234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.292272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.292283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.292520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.292741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.292750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.292758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.292766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.305373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.305831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.305852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.305860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.306090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.306308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.306317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.306324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.306332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.319230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.319705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.319722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.319730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.319954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.320172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.320185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.320192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.320199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.333099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.333529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.333546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.333554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.333772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.333996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.334005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.334012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.334018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.346920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.347457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.347474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.347482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.347699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.347924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.347933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.347940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.347946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.360838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.361420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.361437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.361444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.361662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.361885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.361894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.361901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.034 [2024-12-05 21:24:10.361907] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.034 [2024-12-05 21:24:10.374804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.034 [2024-12-05 21:24:10.375343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.034 [2024-12-05 21:24:10.375359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.034 [2024-12-05 21:24:10.375367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.034 [2024-12-05 21:24:10.375584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.034 [2024-12-05 21:24:10.375801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.034 [2024-12-05 21:24:10.375809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.034 [2024-12-05 21:24:10.375816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.035 [2024-12-05 21:24:10.375823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.035 [2024-12-05 21:24:10.388737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.035 [2024-12-05 21:24:10.389191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.035 [2024-12-05 21:24:10.389208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.035 [2024-12-05 21:24:10.389215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.035 [2024-12-05 21:24:10.389432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.035 [2024-12-05 21:24:10.389650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.035 [2024-12-05 21:24:10.389657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.035 [2024-12-05 21:24:10.389664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.035 [2024-12-05 21:24:10.389671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.035 [2024-12-05 21:24:10.402625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.035 [2024-12-05 21:24:10.403167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.035 [2024-12-05 21:24:10.403184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.035 [2024-12-05 21:24:10.403192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.035 [2024-12-05 21:24:10.403409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.035 [2024-12-05 21:24:10.403627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.035 [2024-12-05 21:24:10.403635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.035 [2024-12-05 21:24:10.403642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.035 [2024-12-05 21:24:10.403649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.035 [2024-12-05 21:24:10.416535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.035 [2024-12-05 21:24:10.417073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.035 [2024-12-05 21:24:10.417094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.035 [2024-12-05 21:24:10.417101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.035 [2024-12-05 21:24:10.417319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.035 [2024-12-05 21:24:10.417536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.035 [2024-12-05 21:24:10.417544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.035 [2024-12-05 21:24:10.417551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.035 [2024-12-05 21:24:10.417557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.035 [2024-12-05 21:24:10.430449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.035 [2024-12-05 21:24:10.431157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.035 [2024-12-05 21:24:10.431195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.035 [2024-12-05 21:24:10.431206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.035 [2024-12-05 21:24:10.431444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.035 [2024-12-05 21:24:10.431665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.035 [2024-12-05 21:24:10.431675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.035 [2024-12-05 21:24:10.431682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.035 [2024-12-05 21:24:10.431690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.035 [2024-12-05 21:24:10.444372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.035 [2024-12-05 21:24:10.444973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.035 [2024-12-05 21:24:10.445010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.035 [2024-12-05 21:24:10.445023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.035 [2024-12-05 21:24:10.445264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.035 [2024-12-05 21:24:10.445485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.035 [2024-12-05 21:24:10.445494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.035 [2024-12-05 21:24:10.445502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.035 [2024-12-05 21:24:10.445510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.371 [2024-12-05 21:24:10.458219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.371 [2024-12-05 21:24:10.458725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.371 [2024-12-05 21:24:10.458763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.371 [2024-12-05 21:24:10.458776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.371 [2024-12-05 21:24:10.459032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.371 [2024-12-05 21:24:10.459255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.371 [2024-12-05 21:24:10.459264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.371 [2024-12-05 21:24:10.459271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.371 [2024-12-05 21:24:10.459279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.371 [2024-12-05 21:24:10.472172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.371 [2024-12-05 21:24:10.472816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.371 [2024-12-05 21:24:10.472854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.371 [2024-12-05 21:24:10.472874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.371 [2024-12-05 21:24:10.473113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.371 [2024-12-05 21:24:10.473335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.371 [2024-12-05 21:24:10.473344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.372 [2024-12-05 21:24:10.473351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.372 [2024-12-05 21:24:10.473359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.372 [2024-12-05 21:24:10.486066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.372 [2024-12-05 21:24:10.486744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.372 [2024-12-05 21:24:10.486783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.372 [2024-12-05 21:24:10.486793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.372 [2024-12-05 21:24:10.487039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.372 [2024-12-05 21:24:10.487262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.372 [2024-12-05 21:24:10.487271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.372 [2024-12-05 21:24:10.487279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.372 [2024-12-05 21:24:10.487286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.372 [2024-12-05 21:24:10.499988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.372 [2024-12-05 21:24:10.500646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.372 [2024-12-05 21:24:10.500683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.372 [2024-12-05 21:24:10.500694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.372 [2024-12-05 21:24:10.500939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.372 [2024-12-05 21:24:10.501162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.372 [2024-12-05 21:24:10.501177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.372 [2024-12-05 21:24:10.501190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.372 [2024-12-05 21:24:10.501199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.372 [2024-12-05 21:24:10.513897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.372 [2024-12-05 21:24:10.514404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.372 [2024-12-05 21:24:10.514423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.372 [2024-12-05 21:24:10.514431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.372 [2024-12-05 21:24:10.514650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.372 [2024-12-05 21:24:10.514874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.372 [2024-12-05 21:24:10.514883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.372 [2024-12-05 21:24:10.514890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.372 [2024-12-05 21:24:10.514897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.372 [2024-12-05 21:24:10.527769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.528426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.528463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.528474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.528712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.528941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.528951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.372 [2024-12-05 21:24:10.528958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.372 [2024-12-05 21:24:10.528966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.372 [2024-12-05 21:24:10.541641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.542317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.542355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.542368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.542606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.542828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.542837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.372 [2024-12-05 21:24:10.542844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.372 [2024-12-05 21:24:10.542852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.372 [2024-12-05 21:24:10.555557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.556265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.556303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.556314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.556551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.556773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.556782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.372 [2024-12-05 21:24:10.556790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.372 [2024-12-05 21:24:10.556798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.372 [2024-12-05 21:24:10.569484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.570091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.570129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.570140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.570377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.570599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.570608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.372 [2024-12-05 21:24:10.570616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.372 [2024-12-05 21:24:10.570624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.372 [2024-12-05 21:24:10.583332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.583961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.583999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.584011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.584250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.584471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.584480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.372 [2024-12-05 21:24:10.584488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.372 [2024-12-05 21:24:10.584496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.372 [2024-12-05 21:24:10.597183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.597835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.597881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.597897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.598136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.598358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.598367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.372 [2024-12-05 21:24:10.598374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.372 [2024-12-05 21:24:10.598382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.372 [2024-12-05 21:24:10.611076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.372 [2024-12-05 21:24:10.611612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.372 [2024-12-05 21:24:10.611631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.372 [2024-12-05 21:24:10.611639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.372 [2024-12-05 21:24:10.611857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.372 [2024-12-05 21:24:10.612085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.372 [2024-12-05 21:24:10.612093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.612100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.612107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.625007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.625528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.625546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.625553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.625771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.625994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.626003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.626010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.626017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.638916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.639577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.639615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.639626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.639873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.640101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.640110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.640118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.640126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.652866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.653443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.653481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.653494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.653734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.653963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.653974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.653982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.653990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 5831.40 IOPS, 22.78 MiB/s [2024-12-05T20:24:10.810Z] [2024-12-05 21:24:10.668350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.668984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.669022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.669034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.669273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.669495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.669504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.669512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.669520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.682231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.682908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.682947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.682959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.683198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.683420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.683429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.683441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.683449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.696151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.696744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.696763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.696771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.696995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.697213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.697222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.697229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.697236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.710122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.710652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.710669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.710677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.710900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.711118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.711126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.711133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.711140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.723960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.724516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.724554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.724565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.724803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.725034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.725044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.725051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.725059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.737752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.738379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.738417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.738428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.738666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.738896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.738906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.373 [2024-12-05 21:24:10.738914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.373 [2024-12-05 21:24:10.738922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.373 [2024-12-05 21:24:10.751611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.373 [2024-12-05 21:24:10.752188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.373 [2024-12-05 21:24:10.752208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.373 [2024-12-05 21:24:10.752216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.373 [2024-12-05 21:24:10.752434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.373 [2024-12-05 21:24:10.752652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.373 [2024-12-05 21:24:10.752660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.374 [2024-12-05 21:24:10.752668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.374 [2024-12-05 21:24:10.752674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.374 [2024-12-05 21:24:10.765587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.374 [2024-12-05 21:24:10.766132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.374 [2024-12-05 21:24:10.766150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.374 [2024-12-05 21:24:10.766157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.374 [2024-12-05 21:24:10.766375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.374 [2024-12-05 21:24:10.766592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.374 [2024-12-05 21:24:10.766601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.374 [2024-12-05 21:24:10.766609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.374 [2024-12-05 21:24:10.766615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.374 [2024-12-05 21:24:10.779539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.374 [2024-12-05 21:24:10.780058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.374 [2024-12-05 21:24:10.780075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.374 [2024-12-05 21:24:10.780086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.374 [2024-12-05 21:24:10.780304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.374 [2024-12-05 21:24:10.780522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.374 [2024-12-05 21:24:10.780530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.374 [2024-12-05 21:24:10.780537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.374 [2024-12-05 21:24:10.780544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.374 [2024-12-05 21:24:10.793440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.374 [2024-12-05 21:24:10.794158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.374 [2024-12-05 21:24:10.794197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.374 [2024-12-05 21:24:10.794208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.374 [2024-12-05 21:24:10.794445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.374 [2024-12-05 21:24:10.794667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.374 [2024-12-05 21:24:10.794676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.374 [2024-12-05 21:24:10.794684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.374 [2024-12-05 21:24:10.794691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.678 [2024-12-05 21:24:10.807374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.678 [2024-12-05 21:24:10.807943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.678 [2024-12-05 21:24:10.807981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.678 [2024-12-05 21:24:10.807994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.678 [2024-12-05 21:24:10.808235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.678 [2024-12-05 21:24:10.808457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.678 [2024-12-05 21:24:10.808466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.678 [2024-12-05 21:24:10.808474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.678 [2024-12-05 21:24:10.808482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.678 [2024-12-05 21:24:10.821175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.678 [2024-12-05 21:24:10.821837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.678 [2024-12-05 21:24:10.821883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.678 [2024-12-05 21:24:10.821895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.678 [2024-12-05 21:24:10.822132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.678 [2024-12-05 21:24:10.822359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.678 [2024-12-05 21:24:10.822368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.678 [2024-12-05 21:24:10.822376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.678 [2024-12-05 21:24:10.822384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.678 [2024-12-05 21:24:10.835058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.678 [2024-12-05 21:24:10.835635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.678 [2024-12-05 21:24:10.835654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.678 [2024-12-05 21:24:10.835662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.678 [2024-12-05 21:24:10.835885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.678 [2024-12-05 21:24:10.836104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.678 [2024-12-05 21:24:10.836112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.678 [2024-12-05 21:24:10.836119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.678 [2024-12-05 21:24:10.836126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.678 [2024-12-05 21:24:10.849011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.678 [2024-12-05 21:24:10.849652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.678 [2024-12-05 21:24:10.849690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.678 [2024-12-05 21:24:10.849702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.678 [2024-12-05 21:24:10.849947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.678 [2024-12-05 21:24:10.850170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.678 [2024-12-05 21:24:10.850179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.678 [2024-12-05 21:24:10.850186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.678 [2024-12-05 21:24:10.850194] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.678 [2024-12-05 21:24:10.862869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.678 [2024-12-05 21:24:10.863425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.678 [2024-12-05 21:24:10.863463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.678 [2024-12-05 21:24:10.863473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.678 [2024-12-05 21:24:10.863711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.678 [2024-12-05 21:24:10.863942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.678 [2024-12-05 21:24:10.863952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.678 [2024-12-05 21:24:10.863965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.678 [2024-12-05 21:24:10.863972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.678 [2024-12-05 21:24:10.876667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.678 [2024-12-05 21:24:10.877328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.678 [2024-12-05 21:24:10.877366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.678 [2024-12-05 21:24:10.877377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.678 [2024-12-05 21:24:10.877615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.678 [2024-12-05 21:24:10.877847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.678 [2024-12-05 21:24:10.877858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.678 [2024-12-05 21:24:10.877875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.678 [2024-12-05 21:24:10.877884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.678 [2024-12-05 21:24:10.890550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.891052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.891089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.891102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.891343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.891565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.891574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.891582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.891590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.904486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.905144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.905183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.905194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.905431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.905653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.905662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.905670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.905677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.918373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.919029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.919067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.919078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.919315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.919537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.919546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.919554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.919562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.932253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.932918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.932957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.932967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.933205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.933427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.933436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.933444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.933452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.946145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.946807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.946845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.946856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.947103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.947325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.947334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.947342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.947349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.960034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.960648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.960687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.960707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.960954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.961177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.961185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.961193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.961201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.973894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.974606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.974644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.974655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.974903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.975125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.975134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.975142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.975150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:10.987867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.679 [2024-12-05 21:24:10.988411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.679 [2024-12-05 21:24:10.988448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.679 [2024-12-05 21:24:10.988459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.679 [2024-12-05 21:24:10.988696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.679 [2024-12-05 21:24:10.988929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.679 [2024-12-05 21:24:10.988938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.679 [2024-12-05 21:24:10.988946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.679 [2024-12-05 21:24:10.988954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.679 [2024-12-05 21:24:11.001845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.002545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.002584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.002595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.002832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.003070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.003081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.003089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.003098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.015783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.016448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.016487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.016499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.016737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.016968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.016977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.016985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.016993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.029675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.030140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.030159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.030167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.030385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.030603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.030612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.030619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.030625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.043526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.044045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.044062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.044069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.044287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.044504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.044511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.044523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.044529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.057426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.058105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.058142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.058154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.058391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.058613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.058622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.058630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.058638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.071349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.071916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.071936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.071944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.072162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.072380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.072388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.072395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.072401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.085322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.085984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.086022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.086033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.086271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.086492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.086501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.086509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.680 [2024-12-05 21:24:11.086517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.680 [2024-12-05 21:24:11.099197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.680 [2024-12-05 21:24:11.099765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.680 [2024-12-05 21:24:11.099803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.680 [2024-12-05 21:24:11.099815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.680 [2024-12-05 21:24:11.100064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.680 [2024-12-05 21:24:11.100286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.680 [2024-12-05 21:24:11.100295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.680 [2024-12-05 21:24:11.100303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.681 [2024-12-05 21:24:11.100311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.943 [2024-12-05 21:24:11.112986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.943 [2024-12-05 21:24:11.113542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.943 [2024-12-05 21:24:11.113579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.944 [2024-12-05 21:24:11.113592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.944 [2024-12-05 21:24:11.113833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.944 [2024-12-05 21:24:11.114063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.944 [2024-12-05 21:24:11.114073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.944 [2024-12-05 21:24:11.114081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.944 [2024-12-05 21:24:11.114089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.944 [2024-12-05 21:24:11.126778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.944 [2024-12-05 21:24:11.127442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.944 [2024-12-05 21:24:11.127480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.944 [2024-12-05 21:24:11.127491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.944 [2024-12-05 21:24:11.127729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.944 [2024-12-05 21:24:11.127960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.944 [2024-12-05 21:24:11.127970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.944 [2024-12-05 21:24:11.127978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.944 [2024-12-05 21:24:11.127986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.944 [2024-12-05 21:24:11.140667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.944 [2024-12-05 21:24:11.141309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.944 [2024-12-05 21:24:11.141348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.944 [2024-12-05 21:24:11.141363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.944 [2024-12-05 21:24:11.141601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.944 [2024-12-05 21:24:11.141824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.944 [2024-12-05 21:24:11.141833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.944 [2024-12-05 21:24:11.141841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.944 [2024-12-05 21:24:11.141849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.944 [2024-12-05 21:24:11.154536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.944 [2024-12-05 21:24:11.155182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.944 [2024-12-05 21:24:11.155220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.944 [2024-12-05 21:24:11.155231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.944 [2024-12-05 21:24:11.155468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.944 [2024-12-05 21:24:11.155690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.944 [2024-12-05 21:24:11.155699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.944 [2024-12-05 21:24:11.155707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.944 [2024-12-05 21:24:11.155714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.944 [2024-12-05 21:24:11.168416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.944 [2024-12-05 21:24:11.168960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.944 [2024-12-05 21:24:11.168998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.944 [2024-12-05 21:24:11.169011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.944 [2024-12-05 21:24:11.169250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.944 [2024-12-05 21:24:11.169471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.944 [2024-12-05 21:24:11.169480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.944 [2024-12-05 21:24:11.169489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.944 [2024-12-05 21:24:11.169497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.944 [2024-12-05 21:24:11.182197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:09.944 [2024-12-05 21:24:11.182783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.944 [2024-12-05 21:24:11.182802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:09.944 [2024-12-05 21:24:11.182810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:09.944 [2024-12-05 21:24:11.183034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:09.944 [2024-12-05 21:24:11.183258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:09.944 [2024-12-05 21:24:11.183266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:09.944 [2024-12-05 21:24:11.183273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:09.944 [2024-12-05 21:24:11.183280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:09.944 [2024-12-05 21:24:11.196148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.944 [2024-12-05 21:24:11.196700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.944 [2024-12-05 21:24:11.196717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.944 [2024-12-05 21:24:11.196724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.944 [2024-12-05 21:24:11.196948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.944 [2024-12-05 21:24:11.197167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.944 [2024-12-05 21:24:11.197175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.944 [2024-12-05 21:24:11.197182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.944 [2024-12-05 21:24:11.197189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.944 [2024-12-05 21:24:11.210065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.944 [2024-12-05 21:24:11.210645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.944 [2024-12-05 21:24:11.210662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.944 [2024-12-05 21:24:11.210669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.944 [2024-12-05 21:24:11.210892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.944 [2024-12-05 21:24:11.211111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.211119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.211126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.945 [2024-12-05 21:24:11.211132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.945 [2024-12-05 21:24:11.224001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.945 [2024-12-05 21:24:11.224576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.945 [2024-12-05 21:24:11.224592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.945 [2024-12-05 21:24:11.224599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.945 [2024-12-05 21:24:11.224816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.945 [2024-12-05 21:24:11.225039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.225047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.225054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.945 [2024-12-05 21:24:11.225064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2294721 Killed "${NVMF_APP[@]}" "$@" 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.945 [2024-12-05 21:24:11.237944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.945 [2024-12-05 21:24:11.238503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.945 [2024-12-05 21:24:11.238519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.945 [2024-12-05 21:24:11.238526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.945 [2024-12-05 21:24:11.238743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.945 [2024-12-05 21:24:11.238967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.238975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.238982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:31:09.945 [2024-12-05 21:24:11.238989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2296711 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2296711 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2296711 ']' 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.945 21:24:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.945 [2024-12-05 21:24:11.251867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.945 [2024-12-05 21:24:11.252358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.945 [2024-12-05 21:24:11.252394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.945 [2024-12-05 21:24:11.252405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.945 [2024-12-05 21:24:11.252643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.945 [2024-12-05 21:24:11.252874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.252884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.252898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.945 [2024-12-05 21:24:11.252906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.945 [2024-12-05 21:24:11.265803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.945 [2024-12-05 21:24:11.266482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.945 [2024-12-05 21:24:11.266521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.945 [2024-12-05 21:24:11.266532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.945 [2024-12-05 21:24:11.266770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.945 [2024-12-05 21:24:11.267000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.267011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.267020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.945 [2024-12-05 21:24:11.267028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.945 [2024-12-05 21:24:11.279733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.945 [2024-12-05 21:24:11.280418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.945 [2024-12-05 21:24:11.280457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.945 [2024-12-05 21:24:11.280469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.945 [2024-12-05 21:24:11.280710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.945 [2024-12-05 21:24:11.280939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.280949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.280957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.945 [2024-12-05 21:24:11.280965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.945 [2024-12-05 21:24:11.293652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.945 [2024-12-05 21:24:11.294165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.945 [2024-12-05 21:24:11.294185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.945 [2024-12-05 21:24:11.294193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.945 [2024-12-05 21:24:11.294412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.945 [2024-12-05 21:24:11.294630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.945 [2024-12-05 21:24:11.294639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.945 [2024-12-05 21:24:11.294647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.945 [2024-12-05 21:24:11.294653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:09.946 [2024-12-05 21:24:11.295828] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:31:09.946 [2024-12-05 21:24:11.295878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.946 [2024-12-05 21:24:11.307539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.946 [2024-12-05 21:24:11.308191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.946 [2024-12-05 21:24:11.308229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.946 [2024-12-05 21:24:11.308241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.946 [2024-12-05 21:24:11.308479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.946 [2024-12-05 21:24:11.308700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.946 [2024-12-05 21:24:11.308709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.946 [2024-12-05 21:24:11.308717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.946 [2024-12-05 21:24:11.308725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.946 [2024-12-05 21:24:11.321421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.946 [2024-12-05 21:24:11.322197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.946 [2024-12-05 21:24:11.322235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.946 [2024-12-05 21:24:11.322247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.946 [2024-12-05 21:24:11.322485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.946 [2024-12-05 21:24:11.322707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.946 [2024-12-05 21:24:11.322716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.946 [2024-12-05 21:24:11.322723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.946 [2024-12-05 21:24:11.322731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.946 [2024-12-05 21:24:11.335317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.946 [2024-12-05 21:24:11.335887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.946 [2024-12-05 21:24:11.335908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.946 [2024-12-05 21:24:11.335916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.946 [2024-12-05 21:24:11.336135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.946 [2024-12-05 21:24:11.336353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.946 [2024-12-05 21:24:11.336361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.946 [2024-12-05 21:24:11.336369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.946 [2024-12-05 21:24:11.336376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.946 [2024-12-05 21:24:11.349295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.946 [2024-12-05 21:24:11.349942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.946 [2024-12-05 21:24:11.349981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.946 [2024-12-05 21:24:11.349992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.946 [2024-12-05 21:24:11.350230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.946 [2024-12-05 21:24:11.350452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.946 [2024-12-05 21:24:11.350462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.946 [2024-12-05 21:24:11.350469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.946 [2024-12-05 21:24:11.350477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.946 [2024-12-05 21:24:11.363175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:09.946 [2024-12-05 21:24:11.363733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.946 [2024-12-05 21:24:11.363752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:09.946 [2024-12-05 21:24:11.363760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:09.946 [2024-12-05 21:24:11.363985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:09.946 [2024-12-05 21:24:11.364204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:09.946 [2024-12-05 21:24:11.364212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:09.946 [2024-12-05 21:24:11.364219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:09.946 [2024-12-05 21:24:11.364226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:09.946 [2024-12-05 21:24:11.377122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.209 [2024-12-05 21:24:11.377797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.209 [2024-12-05 21:24:11.377836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.209 [2024-12-05 21:24:11.377847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.209 [2024-12-05 21:24:11.378092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.209 [2024-12-05 21:24:11.378315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.209 [2024-12-05 21:24:11.378324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.209 [2024-12-05 21:24:11.378332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.209 [2024-12-05 21:24:11.378340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.209 [2024-12-05 21:24:11.391045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.209 [2024-12-05 21:24:11.391601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.209 [2024-12-05 21:24:11.391643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.209 [2024-12-05 21:24:11.391656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.209 [2024-12-05 21:24:11.391905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.209 [2024-12-05 21:24:11.392127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.209 [2024-12-05 21:24:11.392137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.209 [2024-12-05 21:24:11.392145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.209 [2024-12-05 21:24:11.392154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.209 [2024-12-05 21:24:11.392957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:10.209 [2024-12-05 21:24:11.404853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.209 [2024-12-05 21:24:11.405440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.209 [2024-12-05 21:24:11.405460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.209 [2024-12-05 21:24:11.405469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.209 [2024-12-05 21:24:11.405688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.209 [2024-12-05 21:24:11.405913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.209 [2024-12-05 21:24:11.405922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.209 [2024-12-05 21:24:11.405930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.209 [2024-12-05 21:24:11.405937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.209 [2024-12-05 21:24:11.418825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.209 [2024-12-05 21:24:11.419512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.209 [2024-12-05 21:24:11.419552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.209 [2024-12-05 21:24:11.419563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.209 [2024-12-05 21:24:11.419803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.209 [2024-12-05 21:24:11.420033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.209 [2024-12-05 21:24:11.420043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.209 [2024-12-05 21:24:11.420051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.209 [2024-12-05 21:24:11.420060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:10.209 [2024-12-05 21:24:11.422272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.209 [2024-12-05 21:24:11.422295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.209 [2024-12-05 21:24:11.422301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.209 [2024-12-05 21:24:11.422307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:10.209 [2024-12-05 21:24:11.422315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.209 [2024-12-05 21:24:11.423529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.209 [2024-12-05 21:24:11.423685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.209 [2024-12-05 21:24:11.423686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.209 [2024-12-05 21:24:11.432754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.209 [2024-12-05 21:24:11.433344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.209 [2024-12-05 21:24:11.433364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.433373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.433592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.433809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.433818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.433826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.433832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.446730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.447441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.447482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.447493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.447734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.447964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.447975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.447983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.447991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.460673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.461373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.461413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.461424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.461663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.461895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.461905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.461913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.461921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.474612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.475300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.475339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.475350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.475589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.475811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.475820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.475828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.475836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.488563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.489120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.489159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.489170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.489407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.489629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.489638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.489646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.489654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.502348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.502784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.502802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.502810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.503034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.503253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.503262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.503269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.503276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.516156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.516573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.516594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.516602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.516819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.517044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.517053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.210 [2024-12-05 21:24:11.517061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.210 [2024-12-05 21:24:11.517069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.210 [2024-12-05 21:24:11.529944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.210 [2024-12-05 21:24:11.530491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.210 [2024-12-05 21:24:11.530507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.210 [2024-12-05 21:24:11.530515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.210 [2024-12-05 21:24:11.530733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.210 [2024-12-05 21:24:11.530956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.210 [2024-12-05 21:24:11.530964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.530971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.530978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.543850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.544372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.544411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.544422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.544659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.544890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.544901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.544908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.544916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.557827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.558534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.558571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.558582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.558825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.559054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.559064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.559073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.559080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.571758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.572315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.572354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.572365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.572603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.572825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.572834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.572841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.572849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.585564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.586222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.586260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.586271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.586509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.586732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.586741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.586749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.586757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.599446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.599898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.599922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.599931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.600152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.600371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.600379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.600391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.600398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.613289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.613908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.613933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.613941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.614163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.614382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.614391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.614398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.614405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.627085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.627609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.627646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.627657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.627903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.628126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.211 [2024-12-05 21:24:11.628134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.211 [2024-12-05 21:24:11.628142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.211 [2024-12-05 21:24:11.628150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.211 [2024-12-05 21:24:11.641044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.211 [2024-12-05 21:24:11.641755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.211 [2024-12-05 21:24:11.641793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.211 [2024-12-05 21:24:11.641805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.211 [2024-12-05 21:24:11.642051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.211 [2024-12-05 21:24:11.642273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.212 [2024-12-05 21:24:11.642282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.212 [2024-12-05 21:24:11.642290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.212 [2024-12-05 21:24:11.642298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.473 [2024-12-05 21:24:11.655000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.473 [2024-12-05 21:24:11.655437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.473 [2024-12-05 21:24:11.655457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.473 [2024-12-05 21:24:11.655466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.473 [2024-12-05 21:24:11.655685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.473 [2024-12-05 21:24:11.655910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.655920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.655927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.655934] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.670466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 4859.50 IOPS, 18.98 MiB/s [2024-12-05T20:24:11.911Z] [2024-12-05 21:24:11.671164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.671202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.671213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.671451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.671673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.671683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.671690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.671698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.684444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.684889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.684909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.684917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.685137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.685355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.685362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.685370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.685376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.698260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.698806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.698848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.698860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.699107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.699329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.699338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.699346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.699354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.712036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.712633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.712652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.712659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.712884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.713103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.713111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.713118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.713125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.726008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.726488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.726505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.726513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.726731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.726955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.726964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.726971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.726978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.739851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.740262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.740281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.740289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.740511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.740729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.740738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.740745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.740751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.753634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.754198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.754217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.754225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.754444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.754663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.754673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.754681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.754688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.767578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.768085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.768103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.768110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.768328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.768545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.768554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.768561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.768567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.781473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.474 [2024-12-05 21:24:11.782171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.474 [2024-12-05 21:24:11.782209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.474 [2024-12-05 21:24:11.782221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.474 [2024-12-05 21:24:11.782458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.474 [2024-12-05 21:24:11.782680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.474 [2024-12-05 21:24:11.782690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.474 [2024-12-05 21:24:11.782702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.474 [2024-12-05 21:24:11.782710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.474 [2024-12-05 21:24:11.795399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.474 [2024-12-05 21:24:11.796110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.474 [2024-12-05 21:24:11.796149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.474 [2024-12-05 21:24:11.796160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.796397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.796619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.796628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.796636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.796644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.809334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.809970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.810008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.810021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.810262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.810485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.810494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.810501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.810509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.823205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.823898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.823937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.823949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.824189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.824410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.824420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.824427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.824435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.837139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.837828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.837874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.837886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.838123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.838345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.838354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.838362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.838370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.851060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.851624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.851662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.851674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.851923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.852147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.852156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.852164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.852172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.864855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.865443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.865463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.865471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.865689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.865915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.865924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.865931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.865938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.878825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.879406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.879445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.879462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.879702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.879931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.879941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.879949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.879957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.892655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.893325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.893364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.893375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.893612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.893835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.893845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.893853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.893860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.475 [2024-12-05 21:24:11.906551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.475 [2024-12-05 21:24:11.907256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.475 [2024-12-05 21:24:11.907294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.475 [2024-12-05 21:24:11.907305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.475 [2024-12-05 21:24:11.907543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.475 [2024-12-05 21:24:11.907765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.475 [2024-12-05 21:24:11.907774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.475 [2024-12-05 21:24:11.907782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.475 [2024-12-05 21:24:11.907790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.736 [2024-12-05 21:24:11.920484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.736 [2024-12-05 21:24:11.920788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.736 [2024-12-05 21:24:11.920807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.736 [2024-12-05 21:24:11.920815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.736 [2024-12-05 21:24:11.921039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.736 [2024-12-05 21:24:11.921263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.736 [2024-12-05 21:24:11.921271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.736 [2024-12-05 21:24:11.921278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.736 [2024-12-05 21:24:11.921285] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.736 [2024-12-05 21:24:11.934373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.736 [2024-12-05 21:24:11.934930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.736 [2024-12-05 21:24:11.934948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.736 [2024-12-05 21:24:11.934956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.736 [2024-12-05 21:24:11.935173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.736 [2024-12-05 21:24:11.935391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.736 [2024-12-05 21:24:11.935400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.736 [2024-12-05 21:24:11.935406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:11.935413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:11.948294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:11.948715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:11.948731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:11.948739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:11.948961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:11.949179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:11.949189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:11.949196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:11.949202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:11.962077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:11.962580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:11.962596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:11.962604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:11.962821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:11.963046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:11.963055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:11.963066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:11.963073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:11.975957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:11.976462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:11.976500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:11.976511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:11.976749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:11.976978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:11.976988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:11.976996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:11.977003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:11.989918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:11.990517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:11.990536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:11.990544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:11.990763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:11.990987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:11.990996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:11.991003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:11.991010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:12.003685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:12.004281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:12.004298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:12.004306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:12.004524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:12.004743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:12.004752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:12.004760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:12.004767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:12.017652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:12.018168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:12.018185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:12.018193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:12.018410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:12.018628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:12.018636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:12.018643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:12.018650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:12.031529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:12.032091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:12.032130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:12.032143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:12.032382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:12.032604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:12.032614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:12.032621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:12.032629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:12.045315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:12.045922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:12.045942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:12.045950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:12.046169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:12.046388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:12.046396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:12.046403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:12.046410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:12.059082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:12.059659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:12.059696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:12.059712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:12.059958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:12.060181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:12.060189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:12.060197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.737 [2024-12-05 21:24:12.060205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.737 [2024-12-05 21:24:12.072885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.737 [2024-12-05 21:24:12.073440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.737 [2024-12-05 21:24:12.073478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.737 [2024-12-05 21:24:12.073489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.737 [2024-12-05 21:24:12.073727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.737 [2024-12-05 21:24:12.073958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.737 [2024-12-05 21:24:12.073969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.737 [2024-12-05 21:24:12.073976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.738 [2024-12-05 21:24:12.073984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.738 [2024-12-05 21:24:12.086688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.738 [2024-12-05 21:24:12.087351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.738 [2024-12-05 21:24:12.087389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.738 [2024-12-05 21:24:12.087400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.738 [2024-12-05 21:24:12.087637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.738 [2024-12-05 21:24:12.087859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.738 [2024-12-05 21:24:12.087876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.738 [2024-12-05 21:24:12.087884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.738 [2024-12-05 21:24:12.087892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:10.738 [2024-12-05 21:24:12.100574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:10.738 [2024-12-05 21:24:12.101120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:10.738 [2024-12-05 21:24:12.101162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420
00:31:10.738 [2024-12-05 21:24:12.101174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set
00:31:10.738 [2024-12-05 21:24:12.101411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor
00:31:10.738 [2024-12-05 21:24:12.101633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:10.738 [2024-12-05 21:24:12.101642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:10.738 [2024-12-05 21:24:12.101650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:10.738 [2024-12-05 21:24:12.101658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:10.738 [2024-12-05 21:24:12.114554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.738 [2024-12-05 21:24:12.115070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-12-05 21:24:12.115109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.738 [2024-12-05 21:24:12.115121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.738 [2024-12-05 21:24:12.115358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.738 [2024-12-05 21:24:12.115580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.738 [2024-12-05 21:24:12.115590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.738 [2024-12-05 21:24:12.115597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.738 [2024-12-05 21:24:12.115605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.738 [2024-12-05 21:24:12.128509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.738 [2024-12-05 21:24:12.129214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-12-05 21:24:12.129252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.738 [2024-12-05 21:24:12.129264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.738 [2024-12-05 21:24:12.129502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.738 [2024-12-05 21:24:12.129724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.738 [2024-12-05 21:24:12.129734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.738 [2024-12-05 21:24:12.129741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.738 [2024-12-05 21:24:12.129749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.738 [2024-12-05 21:24:12.140876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.738 [2024-12-05 21:24:12.142441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.738 [2024-12-05 21:24:12.143132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-12-05 21:24:12.143170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.738 [2024-12-05 21:24:12.143181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.738 [2024-12-05 21:24:12.143419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.738 [2024-12-05 21:24:12.143641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.738 [2024-12-05 21:24:12.143650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.738 [2024-12-05 21:24:12.143658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.738 [2024-12-05 21:24:12.143666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.738 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.738 [2024-12-05 21:24:12.156353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.738 [2024-12-05 21:24:12.156950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.738 [2024-12-05 21:24:12.156969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.738 [2024-12-05 21:24:12.156977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.738 [2024-12-05 21:24:12.157196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.738 [2024-12-05 21:24:12.157414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.738 [2024-12-05 21:24:12.157422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.738 [2024-12-05 21:24:12.157429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.738 [2024-12-05 21:24:12.157437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.738 [2024-12-05 21:24:12.170315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.998 [2024-12-05 21:24:12.170873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.998 [2024-12-05 21:24:12.170892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.998 [2024-12-05 21:24:12.170899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.998 [2024-12-05 21:24:12.171117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.998 [2024-12-05 21:24:12.171335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.998 [2024-12-05 21:24:12.171343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.998 [2024-12-05 21:24:12.171350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.998 [2024-12-05 21:24:12.171357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.998 Malloc0 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.998 [2024-12-05 21:24:12.184263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.998 [2024-12-05 21:24:12.184684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.998 [2024-12-05 21:24:12.184700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.998 [2024-12-05 21:24:12.184709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.998 [2024-12-05 21:24:12.184932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.998 [2024-12-05 21:24:12.185151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.998 [2024-12-05 21:24:12.185159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.998 [2024-12-05 21:24:12.185166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.998 [2024-12-05 21:24:12.185173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.998 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:10.998 [2024-12-05 21:24:12.198126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.998 [2024-12-05 21:24:12.198781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.998 [2024-12-05 21:24:12.198821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea7780 with addr=10.0.0.2, port=4420 00:31:10.998 [2024-12-05 21:24:12.198832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7780 is same with the state(6) to be set 00:31:10.998 [2024-12-05 21:24:12.199078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7780 (9): Bad file descriptor 00:31:10.998 [2024-12-05 21:24:12.199301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:10.998 [2024-12-05 21:24:12.199310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:10.999 [2024-12-05 21:24:12.199318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:10.999 [2024-12-05 21:24:12.199326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:10.999 [2024-12-05 21:24:12.204167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.999 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.999 21:24:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2295235 00:31:10.999 [2024-12-05 21:24:12.212011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:10.999 [2024-12-05 21:24:12.243456] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:31:12.642 4937.14 IOPS, 19.29 MiB/s [2024-12-05T20:24:15.021Z] 5718.75 IOPS, 22.34 MiB/s [2024-12-05T20:24:15.976Z] 6316.89 IOPS, 24.68 MiB/s [2024-12-05T20:24:16.918Z] 6811.70 IOPS, 26.61 MiB/s [2024-12-05T20:24:17.859Z] 7199.91 IOPS, 28.12 MiB/s [2024-12-05T20:24:18.801Z] 7523.25 IOPS, 29.39 MiB/s [2024-12-05T20:24:19.742Z] 7795.69 IOPS, 30.45 MiB/s [2024-12-05T20:24:21.126Z] 8032.07 IOPS, 31.38 MiB/s 00:31:19.689 Latency(us) 00:31:19.689 [2024-12-05T20:24:21.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.689 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:19.689 Verification LBA range: start 0x0 length 0x4000 00:31:19.689 Nvme1n1 : 15.00 8255.06 32.25 9860.99 0.00 7039.43 791.89 15182.51 00:31:19.689 [2024-12-05T20:24:21.126Z] =================================================================================================================== 00:31:19.689 [2024-12-05T20:24:21.126Z] Total : 8255.06 32.25 9860.99 0.00 7039.43 791.89 15182.51 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:19.689 
21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:19.689 rmmod nvme_tcp 00:31:19.689 rmmod nvme_fabrics 00:31:19.689 rmmod nvme_keyring 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2296711 ']' 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2296711 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2296711 ']' 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@958 -- # kill -0 2296711 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2296711 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2296711' 00:31:19.689 killing process with pid 2296711 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2296711 00:31:19.689 21:24:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2296711 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.689 21:24:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.235 00:31:22.235 real 0m29.025s 00:31:22.235 user 1m3.175s 00:31:22.235 sys 0m8.192s 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:22.235 ************************************ 00:31:22.235 END TEST nvmf_bdevperf 00:31:22.235 ************************************ 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.235 ************************************ 00:31:22.235 START TEST nvmf_target_disconnect 00:31:22.235 ************************************ 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:22.235 * Looking for test storage... 
00:31:22.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:22.235 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:22.235 21:24:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.236 
--rc genhtml_branch_coverage=1 00:31:22.236 --rc genhtml_function_coverage=1 00:31:22.236 --rc genhtml_legend=1 00:31:22.236 --rc geninfo_all_blocks=1 00:31:22.236 --rc geninfo_unexecuted_blocks=1 00:31:22.236 00:31:22.236 ' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.236 --rc genhtml_branch_coverage=1 00:31:22.236 --rc genhtml_function_coverage=1 00:31:22.236 --rc genhtml_legend=1 00:31:22.236 --rc geninfo_all_blocks=1 00:31:22.236 --rc geninfo_unexecuted_blocks=1 00:31:22.236 00:31:22.236 ' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.236 --rc genhtml_branch_coverage=1 00:31:22.236 --rc genhtml_function_coverage=1 00:31:22.236 --rc genhtml_legend=1 00:31:22.236 --rc geninfo_all_blocks=1 00:31:22.236 --rc geninfo_unexecuted_blocks=1 00:31:22.236 00:31:22.236 ' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.236 --rc genhtml_branch_coverage=1 00:31:22.236 --rc genhtml_function_coverage=1 00:31:22.236 --rc genhtml_legend=1 00:31:22.236 --rc geninfo_all_blocks=1 00:31:22.236 --rc geninfo_unexecuted_blocks=1 00:31:22.236 00:31:22.236 ' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.236 21:24:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.236 21:24:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.376 
21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:30.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:30.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:30.376 Found net devices under 0000:31:00.0: cvl_0_0 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:30.376 Found net devices under 0000:31:00.1: cvl_0_1 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.376 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.377 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.377 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.638 21:24:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:31:30.638 00:31:30.638 --- 10.0.0.2 ping statistics --- 00:31:30.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.638 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:30.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:31:30.638 00:31:30.638 --- 10.0.0.1 ping statistics --- 00:31:30.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.638 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.638 21:24:31 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.638 21:24:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:30.638 ************************************ 00:31:30.638 START TEST nvmf_target_disconnect_tc1 00:31:30.638 ************************************ 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:30.638 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:30.639 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:30.639 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.901 [2024-12-05 21:24:32.174456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:30.901 [2024-12-05 21:24:32.174529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58d00 with 
addr=10.0.0.2, port=4420 00:31:30.901 [2024-12-05 21:24:32.174563] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:30.901 [2024-12-05 21:24:32.174580] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:30.901 [2024-12-05 21:24:32.174588] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:30.901 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:30.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:30.901 Initializing NVMe Controllers 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:30.901 00:31:30.901 real 0m0.141s 00:31:30.901 user 0m0.067s 00:31:30.901 sys 0m0.073s 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:30.901 ************************************ 00:31:30.901 END TEST nvmf_target_disconnect_tc1 00:31:30.901 ************************************ 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:30.901 21:24:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:30.901 ************************************ 00:31:30.901 START TEST nvmf_target_disconnect_tc2 00:31:30.901 ************************************ 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2303433 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2303433 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2303433 ']' 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.901 21:24:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:30.901 [2024-12-05 21:24:32.335496] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:31:30.901 [2024-12-05 21:24:32.335542] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.163 [2024-12-05 21:24:32.441174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.163 [2024-12-05 21:24:32.484162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.163 [2024-12-05 21:24:32.484206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.163 [2024-12-05 21:24:32.484214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.163 [2024-12-05 21:24:32.484221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.163 [2024-12-05 21:24:32.484227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:31.163 [2024-12-05 21:24:32.486003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:31.163 [2024-12-05 21:24:32.486219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:31.163 [2024-12-05 21:24:32.486375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:31.163 [2024-12-05 21:24:32.486377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:31.737 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.737 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:31.737 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:31.737 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.737 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 Malloc0 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.999 21:24:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 [2024-12-05 21:24:33.244929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.999 21:24:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 [2024-12-05 21:24:33.285370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2303715 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:31.999 21:24:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.911 21:24:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2303433 00:31:33.911 21:24:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write 
completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Read completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 Write completed with error (sct=0, sc=8) 00:31:33.911 starting I/O failed 00:31:33.911 [2024-12-05 21:24:35.320531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:33.911 [2024-12-05 21:24:35.321096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.321136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.321438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.321453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 
00:31:33.911 [2024-12-05 21:24:35.321742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.321755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.322141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.322182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.322512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.322528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.322723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.322735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.323215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.323253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 
00:31:33.911 [2024-12-05 21:24:35.323584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.323599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.323871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.323884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.324220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.324233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.324577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.324589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.325087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.325128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 
00:31:33.911 [2024-12-05 21:24:35.325432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.325446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.325562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.325576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.325921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.325934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.326267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.326280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.326578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.326590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 
00:31:33.911 [2024-12-05 21:24:35.326834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.326850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.327166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.327179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.327382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.327394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.327733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.327745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.328114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.328127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 
00:31:33.911 [2024-12-05 21:24:35.328452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.328464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.328777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.328790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.329111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.329123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.329312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.329324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.329549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.329562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 
00:31:33.911 [2024-12-05 21:24:35.329822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.329834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.330190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.330203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.330389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.330403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.911 [2024-12-05 21:24:35.330704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.911 [2024-12-05 21:24:35.330716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.911 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.330934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.330947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.331266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.331278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.331614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.331626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.331926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.331939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.332257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.332269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.332431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.332443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.332741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.332754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.332946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.332959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.333303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.333315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.333607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.333619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.333927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.333939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.334241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.334254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.334533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.334545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.334760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.334775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.335115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.335128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.335456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.335468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.335782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.335794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.336112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.336125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.336433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.336445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.336764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.336776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.337115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.337127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.337470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.337482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.337787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.337798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.338153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.338166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.338500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.338513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.338811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.338824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.339127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.339140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.339510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.339522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.339707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.339721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.340065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.340077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.340391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.340403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.340738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.340749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.341026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.341038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.341319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.341330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.341620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.341633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.341967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.341979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.342274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.342286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.342584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.342595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.342937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.342949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.343314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.343325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.343630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.343641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 
00:31:33.912 [2024-12-05 21:24:35.343964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.343976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.344251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.344262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:33.912 [2024-12-05 21:24:35.345069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.912 [2024-12-05 21:24:35.345093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:33.912 qpair failed and we were unable to recover it. 00:31:34.184 [2024-12-05 21:24:35.345412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.184 [2024-12-05 21:24:35.345426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.184 qpair failed and we were unable to recover it. 00:31:34.184 [2024-12-05 21:24:35.345760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.184 [2024-12-05 21:24:35.345773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.184 qpair failed and we were unable to recover it. 
00:31:34.184 [2024-12-05 21:24:35.346095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.184 [2024-12-05 21:24:35.346107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.184 qpair failed and we were unable to recover it. 00:31:34.184 [2024-12-05 21:24:35.346447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.346458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.346787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.346799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.347011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.347023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.347334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.347344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 
00:31:34.185 [2024-12-05 21:24:35.347683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.347694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.348004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.348017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.348340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.348352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.348525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.348538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 00:31:34.185 [2024-12-05 21:24:35.348735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.348747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it. 
00:31:34.185 [2024-12-05 21:24:35.349051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.185 [2024-12-05 21:24:35.349063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.185 qpair failed and we were unable to recover it.
00:31:34.185 [log trimmed: the identical error pair — posix.c:1054:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420" — repeats continuously from 21:24:35.349395 through 21:24:35.384739, and every retry ends with "qpair failed and we were unable to recover it."]
00:31:34.187 [2024-12-05 21:24:35.385043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.385054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.385384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.385395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.385705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.385717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.385951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.385962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.386278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.386289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 
00:31:34.187 [2024-12-05 21:24:35.386509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.386521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.386847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.386859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.387176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.387188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.387277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.387287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.387498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.387510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 
00:31:34.187 [2024-12-05 21:24:35.387846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.387858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.388161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.388172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.388499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.388510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.388704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.388716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.388926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.388937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 
00:31:34.187 [2024-12-05 21:24:35.389191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.389203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.389541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.389552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.389866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.389877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.390170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.390181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.390475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.390487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 
00:31:34.187 [2024-12-05 21:24:35.390812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.390823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.391122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.391134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.391473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.391484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.391786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.391797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.392000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.392011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 
00:31:34.187 [2024-12-05 21:24:35.392333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.392345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.392543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.392553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.392845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.392855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.393181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.393192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.393521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.393533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 
00:31:34.187 [2024-12-05 21:24:35.393836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.393847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.394184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.187 [2024-12-05 21:24:35.394196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.187 qpair failed and we were unable to recover it. 00:31:34.187 [2024-12-05 21:24:35.394526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.394537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.394854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.394868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.395184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.395195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.395398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.395408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.395718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.395730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.396065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.396076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.396244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.396255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.396574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.396585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.396931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.396943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.397132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.397144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.397459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.397470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.397751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.397762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.397990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.398001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.398168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.398180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.398515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.398526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.398837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.398848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.399159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.399170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.399504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.399515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.399807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.399818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.400130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.400143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.400476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.400487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.400663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.400673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.401073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.401085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.401383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.401393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.401774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.401785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.402098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.402110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.402452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.402463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.402828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.402839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.403137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.403150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.403479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.403490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.403798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.403809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.404089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.404101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.404303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.404313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.404596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.404608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.404902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.404913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.405216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.405227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.405535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.405547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.405734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.405746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.406040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.406051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.406378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.406389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.406698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.406709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.407024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.407036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.407348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.407359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 
00:31:34.188 [2024-12-05 21:24:35.407672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.407682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.407966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.407977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.408281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.408292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.408601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.188 [2024-12-05 21:24:35.408611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.188 qpair failed and we were unable to recover it. 00:31:34.188 [2024-12-05 21:24:35.408943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.408954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.409269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.409280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.409584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.409594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.409992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.410003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.410330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.410342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.410684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.410695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.411007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.411018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.411337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.411348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.411650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.411663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.411953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.411964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.412279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.412290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.412591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.412602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.412932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.412943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.413250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.413262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.413562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.413573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.413920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.413932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.414240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.414251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.414560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.414571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.414879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.414891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.415069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.415081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.415270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.415282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.415586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.415598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.415936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.415947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.416259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.416271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.416605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.416616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.416908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.416919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.417239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.417250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.417586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.417597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.417889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.417900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.418127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.418137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.418440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.418719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.418730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.419038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.419050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.419345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.419356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.419663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.419674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.419974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.419986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.420299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.420310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.420612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.420624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.420939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.420951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.421123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.421136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.421431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.421442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.421809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.421820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.422108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.422119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.422444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.422456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.422757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.422768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.423060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.423071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.423379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.423390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.423699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.423710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.424045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.424056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.424382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.424396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.424702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.424713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 
00:31:34.189 [2024-12-05 21:24:35.425037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.425048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.425350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.189 [2024-12-05 21:24:35.425361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.189 qpair failed and we were unable to recover it. 00:31:34.189 [2024-12-05 21:24:35.425662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.425673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.426016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.426027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.426328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.426338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.426641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.426652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.426983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.426995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.427302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.427314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.427622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.427633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.427927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.427938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.428105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.428117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.428447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.428458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.428796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.428807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.429121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.429133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.429440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.429451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.429743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.429753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.430055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.430066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.430384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.430395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.430681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.430692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.431000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.431011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.431244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.431255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.431553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.431563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.431893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.431904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.432208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.432218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.432534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.432545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.432847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.432865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.433174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.433186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.433496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.433507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.433814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.433826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.434168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.434180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.434504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.434516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.434817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.434828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.435138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.435150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.435448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.435460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.435785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.435797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.436106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.436118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.436455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.436466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.436679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.436691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.436997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.437008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 00:31:34.190 [2024-12-05 21:24:35.437294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.190 [2024-12-05 21:24:35.437305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.190 qpair failed and we were unable to recover it. 
00:31:34.190 [2024-12-05 21:24:35.437573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.190 [2024-12-05 21:24:35.437584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.190 qpair failed and we were unable to recover it.
00:31:34.190 [2024-12-05 21:24:35.437891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.190 [2024-12-05 21:24:35.437902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.190 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats verbatim for every reconnect attempt from 21:24:35.438107 through 21:24:35.473179, always tqpair=0xf72490, addr=10.0.0.2, port=4420, errno = 111 ...]
00:31:34.192 [2024-12-05 21:24:35.473507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.192 [2024-12-05 21:24:35.473518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.192 qpair failed and we were unable to recover it.
00:31:34.192 [2024-12-05 21:24:35.473825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.192 [2024-12-05 21:24:35.473836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.192 qpair failed and we were unable to recover it. 00:31:34.192 [2024-12-05 21:24:35.474163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.192 [2024-12-05 21:24:35.474175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.474510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.474522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.474829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.474841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.475139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.475151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.475418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.475430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.475709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.475720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.476026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.476037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.476369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.476380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.476687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.476698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.476957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.476968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.477270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.477281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.477584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.477595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.477904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.477915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.478260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.478272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.478576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.478587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.478905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.478917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.479247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.479258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.479449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.479460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.479771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.479783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.480102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.480113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.480479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.480490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.480816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.480828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.481117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.481128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.481434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.481445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.481717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.481728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.482062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.482073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.482357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.482369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.482675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.482686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.483008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.483019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.483356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.483368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.483578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.483589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.483859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.483874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.484165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.484177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.484544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.484555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.484860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.484876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.485177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.485188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.485501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.485512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.485841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.485852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.486161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.486173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.486469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.486480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.486789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.486800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.487103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.487115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.487438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.487449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.487750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.487762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.488069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.488080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.488398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.488408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.488709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.488724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.489030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.489044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.489369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.489381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 
00:31:34.193 [2024-12-05 21:24:35.489715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.489727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.490032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.490043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.490274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.490285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.490614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.193 [2024-12-05 21:24:35.490626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.193 qpair failed and we were unable to recover it. 00:31:34.193 [2024-12-05 21:24:35.490957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.490969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 
00:31:34.194 [2024-12-05 21:24:35.491175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.491186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.491460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.491471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.491772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.491783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.492067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.492078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.492382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.492393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 
00:31:34.194 [2024-12-05 21:24:35.492699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.492710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.493022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.493034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.493327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.493337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.493640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.493650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.493953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.493964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 
00:31:34.194 [2024-12-05 21:24:35.494161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.494173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.494464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.494474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.494781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.494792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.495099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.495111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.495452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.495463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 
00:31:34.194 [2024-12-05 21:24:35.495791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.495803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.496105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.496116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.496418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.496429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.496631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.496642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 00:31:34.194 [2024-12-05 21:24:35.496958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.194 [2024-12-05 21:24:35.496970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.194 qpair failed and we were unable to recover it. 
00:31:34.194 [2024-12-05 21:24:35.497277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.194 [2024-12-05 21:24:35.497288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.194 qpair failed and we were unable to recover it.
00:31:34.196 [identical connect() failure (errno = 111) and qpair-recovery sequence for tqpair=0xf72490, addr=10.0.0.2, port=4420 repeated for each reconnect attempt from 21:24:35.497633 through 21:24:35.533474]
00:31:34.196 [2024-12-05 21:24:35.533816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.533827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.534162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.534173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.534523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.534534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.534821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.534832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.535168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.535181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 
00:31:34.196 [2024-12-05 21:24:35.535509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.535521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.535819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.535830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.536139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.536151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.536435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.536445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.536744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.536755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 
00:31:34.196 [2024-12-05 21:24:35.537067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.537079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.537386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.537397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.537677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.537688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.537989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.538001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.538314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.538326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 
00:31:34.196 [2024-12-05 21:24:35.538692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.538703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.539008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.539019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.539324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.539335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.539633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.539645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 00:31:34.196 [2024-12-05 21:24:35.539972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.196 [2024-12-05 21:24:35.539984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.196 qpair failed and we were unable to recover it. 
00:31:34.196 [2024-12-05 21:24:35.540321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.540333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.540643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.540655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.540855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.540873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.541176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.541187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.541514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.541526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.541835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.541846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.542139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.542151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.542478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.542489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.542829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.542840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.543152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.543164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.543497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.543509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.543723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.543734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.544043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.544054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.544365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.544376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.544679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.544692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.545017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.545028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.545330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.545343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.545669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.545680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.545990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.546002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.546283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.546294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.546622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.546633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.546939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.546950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.547269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.547281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.547610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.547622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.547950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.547961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.548268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.548280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.548579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.548590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.548891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.548902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.549206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.549217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.549521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.549532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.549878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.549890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.550185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.550196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.550481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.550492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.550819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.550830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.551136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.551148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.551324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.551335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.551691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.551705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.552026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.552039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.552314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.552325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.552640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.552651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.552965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.552977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.553302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.553316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.553645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.553656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.553956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.554313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.554324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.554665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.554677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.554978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.554989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.555294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.555305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.555579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.555591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.555904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.555916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 
00:31:34.197 [2024-12-05 21:24:35.556197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.556208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.556526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.556537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.556869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.556880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.197 qpair failed and we were unable to recover it. 00:31:34.197 [2024-12-05 21:24:35.557192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.197 [2024-12-05 21:24:35.557203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.198 qpair failed and we were unable to recover it. 00:31:34.198 [2024-12-05 21:24:35.557544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.198 [2024-12-05 21:24:35.557555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.198 qpair failed and we were unable to recover it. 
00:31:34.198 [2024-12-05 21:24:35.557872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.198 [2024-12-05 21:24:35.557885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.198 qpair failed and we were unable to recover it.
00:31:34.200 [... the three entries above repeat verbatim (connect() refused, errno = 111; tqpair=0xf72490, addr=10.0.0.2, port=4420; qpair unrecoverable) with timestamps advancing from 21:24:35.558230 through 21:24:35.594785 ...]
00:31:34.200 [2024-12-05 21:24:35.595100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.595112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.595492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.595819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.595832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.596105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.596118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.596427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.596439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.596745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.596758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.597038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.597051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.597358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.597371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.597702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.597715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.598024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.598036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.598378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.598391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.598693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.598706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.599027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.599040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.599366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.599378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.599660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.599674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.599983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.599995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.600326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.600338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.600659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.600672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.600976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.600989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.601313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.601325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.601632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.601645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.601973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.601986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.602318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.602332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.602522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.602534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.602856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.602872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.603175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.603187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.603550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.603561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.603871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.603883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.604202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.604213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.604546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.604558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.604899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.604911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.605152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.605164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.605526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.605538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.605734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.605746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.605969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.605982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 
00:31:34.200 [2024-12-05 21:24:35.606300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.606311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.606645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.606656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.606957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.606969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.200 qpair failed and we were unable to recover it. 00:31:34.200 [2024-12-05 21:24:35.607304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.200 [2024-12-05 21:24:35.607315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.201 qpair failed and we were unable to recover it. 00:31:34.201 [2024-12-05 21:24:35.607623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.201 [2024-12-05 21:24:35.607634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.201 qpair failed and we were unable to recover it. 
00:31:34.201 [2024-12-05 21:24:35.607870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.201 [2024-12-05 21:24:35.607884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.201 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.608205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.608218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.608522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.608534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.608861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.608877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.609203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.609216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.609542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.609554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.609886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.609898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.610221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.610232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.610544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.610559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.610909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.610922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.611242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.611254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.611577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.611588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.611897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.611909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.612226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.612237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.612574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.612585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.612930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.612942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.613263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.613275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.613578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.613590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.613929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.613941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.614272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.614284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.614474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.614487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.614789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.614801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.615103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.615115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.615426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.615439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.615747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.615759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.616093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.616106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.616438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.616450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.616776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.616789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.617126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.617137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.617438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.617450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.617622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.617635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.617916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.617927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.618244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.618255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.618549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.618561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 00:31:34.474 [2024-12-05 21:24:35.618852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.618874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
00:31:34.474 [2024-12-05 21:24:35.619166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.474 [2024-12-05 21:24:35.619178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.474 qpair failed and we were unable to recover it. 
[log trimmed: the same pair of errors — connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1054:posix_sock_create, followed by the sock connection error for tqpair=0xf72490 at addr=10.0.0.2, port=4420 in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, each ending "qpair failed and we were unable to recover it." — repeats continuously for every retry from 21:24:35.619 through 21:24:35.655]
00:31:34.477 [2024-12-05 21:24:35.655600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.655612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.655819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.655830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.656010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.656021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.656330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.656342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.656672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.656693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.657017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.657029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.657337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.657350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.657672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.657684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.658021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.658033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.658308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.658319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.658628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.658639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.658969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.658982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.659316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.659327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.659584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.659594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.659905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.659916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.660321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.660332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.660666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.660679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.661013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.661024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.661357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.661370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.661682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.661694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.661996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.662008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.662334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.662345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.662673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.662684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.662985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.662997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.663326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.663338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.663640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.663651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.663969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.663981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.664307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.664319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.664652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.664664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.664953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.664964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.665296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.665308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.665635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.665647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.665962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.665975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.666301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.666313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.666614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.666627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 
00:31:34.477 [2024-12-05 21:24:35.666958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.666972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.667306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.667319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.667675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.667686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.477 qpair failed and we were unable to recover it. 00:31:34.477 [2024-12-05 21:24:35.667985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.477 [2024-12-05 21:24:35.667997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.668310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.668321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.668629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.668641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.668959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.668971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.669263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.669274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.669447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.669460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.669639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.669651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.669949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.669961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.670310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.670322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.670608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.670620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.670952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.670964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.671291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.671304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.671631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.671643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.671973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.671984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.672319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.672330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.672643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.672655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.672981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.672993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.673285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.673295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.673573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.673585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.673899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.673914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.674199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.674210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.674502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.674514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.674829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.674841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.674910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.674923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.675188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.675201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.675530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.675542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.675870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.675884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.676200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.676211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.676542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.676554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.676858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.676874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.677201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.677213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.677543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.677555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.677867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.677881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.678190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.678203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.678532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.678543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.678877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.678890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 00:31:34.478 [2024-12-05 21:24:35.679219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.679230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.478 [2024-12-05 21:24:35.679561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.478 [2024-12-05 21:24:35.679573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.478 qpair failed and we were unable to recover it. 
00:31:34.480 [last message repeated: connect() failed (errno = 111), sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — identical error triplet recurring with timestamps 21:24:35.679907 through 21:24:35.716133]
00:31:34.480 [2024-12-05 21:24:35.716400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.480 [2024-12-05 21:24:35.716411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.480 qpair failed and we were unable to recover it. 00:31:34.480 [2024-12-05 21:24:35.716731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.480 [2024-12-05 21:24:35.716743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.480 qpair failed and we were unable to recover it. 00:31:34.480 [2024-12-05 21:24:35.717061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.480 [2024-12-05 21:24:35.717073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.480 qpair failed and we were unable to recover it. 00:31:34.480 [2024-12-05 21:24:35.717264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.480 [2024-12-05 21:24:35.717277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.480 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.717605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.717617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.717927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.717939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.718267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.718278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.718612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.718623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.718927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.718938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.719127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.719139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.719429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.719439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.719775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.719786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.720112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.720125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.720452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.720463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.720763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.720774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.721080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.721092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.721362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.721372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.721722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.721734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.722038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.722049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.722332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.722343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.722619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.722630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.722927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.722938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.723252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.723267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.723598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.723609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.723910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.723921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.724247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.724547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.724557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.724883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.724896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.725185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.725196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.725465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.725475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.725842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.725853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.726058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.726069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.726377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.726388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.726690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.726701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.727002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.727014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.727345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.727357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.727687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.727698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.728007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.728019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.728324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.728336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.728665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.728677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.728942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.728953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.729262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.729273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.729637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.729648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.729975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.729987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.730312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.730324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.730624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.730636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.730957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.730968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.731300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.731311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.731497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.731508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.731812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.731824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.732134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.732146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.732452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.732464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.732836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.732847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.733257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.733269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.481 [2024-12-05 21:24:35.733570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.733582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 
00:31:34.481 [2024-12-05 21:24:35.733869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.481 [2024-12-05 21:24:35.733881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.481 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.734180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.734190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.734569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.734580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.734879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.734890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.735203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.735214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 
00:31:34.482 [2024-12-05 21:24:35.735520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.735531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.735839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.735850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.736172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.736184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.736380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.736392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.736715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.736727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 
00:31:34.482 [2024-12-05 21:24:35.737113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.737125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.737426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.737438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.737767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.737778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.738083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.738095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.738393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.738404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 
00:31:34.482 [2024-12-05 21:24:35.738710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.738722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.739058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.739069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.739376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.739387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.739713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.739725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.740017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.740028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 
00:31:34.482 [2024-12-05 21:24:35.740358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.740370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.740670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.740681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.741012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.741024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.741327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.741339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.741668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.741680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 
00:31:34.482 [2024-12-05 21:24:35.741988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.741999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.742309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.742321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.742695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.742707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.743032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.743045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 00:31:34.482 [2024-12-05 21:24:35.743337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.482 [2024-12-05 21:24:35.743348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.482 qpair failed and we were unable to recover it. 
00:31:34.482 [2024-12-05 21:24:35.743667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.743677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.744022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.744034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.744370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.744381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.744684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.744695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.745007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.745019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.745341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.745354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.745689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.745700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.746029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.746041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.746260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.746271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.746450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.746461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.746637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.746649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.746850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.746866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.747141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.747152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.747452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.747463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.747798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.747809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.748114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.748125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.748439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.748450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.748751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.748762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.749071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.749083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.749392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.749404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.749711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.749722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.750025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.750036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.750376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.750387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.482 [2024-12-05 21:24:35.750600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.482 [2024-12-05 21:24:35.750610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.482 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.750916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.750927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.751114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.751126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.751406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.751419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.751709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.751721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.752041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.752053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.752356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.752368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.752652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.752664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.753043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.753056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.753349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.753364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.753672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.753684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.754035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.754048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.754357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.754369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.754723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.754735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.755058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.755071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.755402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.755415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.755721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.755733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.756040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.756053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.756236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.756249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.756555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.756566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.756873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.756886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.757194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.757205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.757508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.757520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.757854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.757869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.757975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.757985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.758269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.758279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.758591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.758602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.758930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.758942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.759144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.759155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.759454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.759466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.759764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.759775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.760079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.760090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.760379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.760390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.760682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.760693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.761004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.761016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.761208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.761220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.761537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.761548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.761772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.761783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.762059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.762072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.762391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.762403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.762701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.762712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.763020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.763031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.763337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.763350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.763660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.763671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.764005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.764018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.764321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.764332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.764644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.764656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.764963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.764975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.765313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.765324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.765627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.765638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.765958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.765972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.766149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.766160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.766443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.766453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.483 [2024-12-05 21:24:35.766773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.483 [2024-12-05 21:24:35.766784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.483 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.767112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.767123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.767442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.767454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.767786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.767797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.768110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.768122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.768450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.768462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.768756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.768768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.769081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.769092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.769406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.769418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.769734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.769745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.770074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.770086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.770451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.770463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.770765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.770776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.771094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.771106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.771437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.771449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.771738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.771749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.772096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.772107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.772439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.772451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.772633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.484 [2024-12-05 21:24:35.772645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:34.484 qpair failed and we were unable to recover it.
00:31:34.484 [2024-12-05 21:24:35.772927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.772938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.773116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.773128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.773414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.773426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.773721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.773732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.774046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.774059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 
00:31:34.484 [2024-12-05 21:24:35.774354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.774367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.774682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.774694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.775009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.775020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.775325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.775337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.775538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.775549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 
00:31:34.484 [2024-12-05 21:24:35.775877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.775890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.776218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.776229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.776560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.776572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.776870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.776883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.777248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.777260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 
00:31:34.484 [2024-12-05 21:24:35.777569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.777581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.777913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.777925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.778125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.778135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.778408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.778419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.778726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.778738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 
00:31:34.484 [2024-12-05 21:24:35.779079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.779091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.779396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.779408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.779740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.779751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.779928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.779939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.780256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.780267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 
00:31:34.484 [2024-12-05 21:24:35.780570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.780581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.780888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.780899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.781232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.781244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.781580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.781592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 00:31:34.484 [2024-12-05 21:24:35.781844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.484 [2024-12-05 21:24:35.781854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.484 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.782194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.782205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.782532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.782544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.782854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.782869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.783149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.783160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.783457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.783468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.783761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.783772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.784075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.784086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.784414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.784425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.784726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.784738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.785033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.785045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.785325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.785645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.785656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.785962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.785974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.786194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.786205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.786523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.786534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.786871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.786882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.787134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.787148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.787464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.787475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.787800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.787812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.788118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.788130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.788486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.788497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.788808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.788820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.789193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.789204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.789524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.789536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.789902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.789914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.790288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.790299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.790477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.790488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.790816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.790827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.791142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.791155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.791478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.791489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.791785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.791797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.792102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.792114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.792417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.792428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.792739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.792751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.793074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.793086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.793412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.793424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.793722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.793734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.794076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.794088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.794380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.794392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.794720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.794732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.795073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.795087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.795402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.795413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.795760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.795772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.796099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.796111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.796447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.796458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.796742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.796753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.797118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.797130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.797432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.797444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.797775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.797786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.798058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.798069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 
00:31:34.485 [2024-12-05 21:24:35.798286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.798297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.798608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.798618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.485 [2024-12-05 21:24:35.798901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.485 [2024-12-05 21:24:35.798912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.485 qpair failed and we were unable to recover it. 00:31:34.486 [2024-12-05 21:24:35.799115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.799125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 00:31:34.486 [2024-12-05 21:24:35.799439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.799450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 
00:31:34.486 [2024-12-05 21:24:35.799761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.799772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 00:31:34.486 [2024-12-05 21:24:35.799984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.799996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 00:31:34.486 [2024-12-05 21:24:35.800307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.800318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 00:31:34.486 [2024-12-05 21:24:35.800651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.800663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 00:31:34.486 [2024-12-05 21:24:35.800964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.486 [2024-12-05 21:24:35.800976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.486 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.834975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.834986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.835326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.835337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.835690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.835701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.836013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.836025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.836344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.836357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.836691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.836704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.837058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.837070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.837394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.837406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.837706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.837719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.838023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.838035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.838365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.838377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.838550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.838562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.838841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.838854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.839149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.839160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.839464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.839476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.839816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.839828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.840171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.840182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.840479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.840491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.840822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.840834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.841195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.841208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.841539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.841552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.841886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.841899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.842228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.842239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.842555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.842569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.842902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.842914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.843216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.843227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.843528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.843540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.843850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.843866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.844171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.844182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.844518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.844530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.844877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.844888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.845208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.845220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.845535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.845547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.845732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.845744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.846076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.846088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 
00:31:34.488 [2024-12-05 21:24:35.846419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.846431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.846773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.846785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.488 [2024-12-05 21:24:35.847067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.488 [2024-12-05 21:24:35.847079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.488 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.847392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.847404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.847716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.847727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.848036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.848048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.848381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.848394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.848617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.848628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.848809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.848821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.848997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.849010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.849346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.849358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.849711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.849723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.850020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.850031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.850360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.850372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.850700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.850712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.851027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.851040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.851317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.851330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.851668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.851983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.851996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.852169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.852180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.852510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.852520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.852869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.852882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.853178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.853190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.853524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.853536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.853882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.853894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.854231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.854243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.854558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.854571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.854874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.854885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.855208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.855219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.855557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.855569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.855870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.855881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.856136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.856148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.856482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.856494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.856803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.856815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.857154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.857166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.857487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.857499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.857842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.857854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.858197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.858208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.858517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.858528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.858843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.858855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.489 [2024-12-05 21:24:35.859163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.859175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.859486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.859498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.859831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.859842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.860098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.860109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 00:31:34.489 [2024-12-05 21:24:35.860438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.489 [2024-12-05 21:24:35.860448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.489 qpair failed and we were unable to recover it. 
00:31:34.491 [2024-12-05 21:24:35.896087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.491 [2024-12-05 21:24:35.896098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.491 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.896485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.896498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.896807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.896819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.897124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.897135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.897447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.897459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.897759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.897770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.898069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.898081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.898417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.898428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.898631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.898643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.898940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.898952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.899268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.899279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.899625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.899638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.899952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.899964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.900300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.900311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.900630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.900642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.900960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.900972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.901288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.901300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.901653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.901665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.901978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.901989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.902314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.902326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.902637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.902649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.902958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.902971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.903281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.903294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.903624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.903636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.903954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.903965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.904283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.904294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.904474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.904485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.904770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.904781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.905098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.905109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.905425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.905435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.905597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.905608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.905926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.905938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.906273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.906285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.906599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.906610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.906877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.906889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.907074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.907087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.907421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.907433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.907742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.907756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.908095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.908107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.908407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.908418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.908766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.908777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.908962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.908974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.909135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.909146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.909481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.909492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.909804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.909816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.910102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.910113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.910448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.910459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.910796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.910807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.911127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.911139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.911475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.911487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.911806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.911818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.912142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.912155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.912472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.912484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.912796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.912807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 00:31:34.767 [2024-12-05 21:24:35.913119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.767 [2024-12-05 21:24:35.913129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.767 qpair failed and we were unable to recover it. 
00:31:34.767 [2024-12-05 21:24:35.913425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.913437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.913646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.913659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.913967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.913978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.914302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.914313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.914608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.914619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 
00:31:34.768 [2024-12-05 21:24:35.914931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.914942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.915246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.915257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.915576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.915588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.915919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.915931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.916129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.916140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 
00:31:34.768 [2024-12-05 21:24:35.916420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.916431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.916734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.916744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.917066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.917077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.917425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.917437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.917773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.917784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 
00:31:34.768 [2024-12-05 21:24:35.918089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.918100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.918415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.918426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.918790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.918801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.919111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.919122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 00:31:34.768 [2024-12-05 21:24:35.919439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.919450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it. 
00:31:34.768 [2024-12-05 21:24:35.919745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.768 [2024-12-05 21:24:35.919756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.768 qpair failed and we were unable to recover it.
00:31:34.770 [... same retry sequence (connect() failed, errno = 111 / sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated through 2024-12-05 21:24:35.954829 ...]
00:31:34.770 [2024-12-05 21:24:35.955132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.955144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.955440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.955451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.955759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.955770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.955959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.955971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.956263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.956274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.956577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.956588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.956900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.956912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.957224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.957235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.957540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.957551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.957897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.957909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.958199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.958210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.958533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.958545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.958852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.958869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.959171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.959183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.959297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.959307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.959603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.959624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.959932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.959943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.960274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.960284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.960602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.960613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.960834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.960844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.961155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.961167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.961538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.961549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.961857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.961874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.962230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.962241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.962553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.962566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.962902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.962914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.963238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.963249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.963426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.963438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.963614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.963624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.963957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.963969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.964280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.964292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.964589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.964601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.964943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.964955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.965272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.965284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.965477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.965489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.965815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.965827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.966144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.966155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.966485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.966497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.966689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.966701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.967029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.967040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.967350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.967361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.967688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.967701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.967875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.967887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.968191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.968202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.968507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.968517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.770 [2024-12-05 21:24:35.968814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.968825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.969133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.969144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.969530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.969541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.969714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.969725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 00:31:34.770 [2024-12-05 21:24:35.970021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.770 [2024-12-05 21:24:35.970032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.770 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.970357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.970368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.970672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.970685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.970943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.970954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.971285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.971296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.971602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.971613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.971807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.971819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.972141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.972153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.972458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.972469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.972851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.972866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.973188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.973198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.973505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.973516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.973822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.973833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.974223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.974235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.974543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.974555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.974743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.974754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.975026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.975038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.975357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.975368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.975718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.975730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.975943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.975954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.976143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.976154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.976457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.976468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.976788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.976801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.977117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.977128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.977448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.977459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.977768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.977779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.978094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.978105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.978461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.978472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.978802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.978815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.979198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.979210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.979518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.979530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.979838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.979849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.980185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.980196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.980505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.980517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.980822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.980834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.981153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.981164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.981501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.981513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.981820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.981832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.982109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.982121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.982302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.982314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.982641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.982653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.982962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.982974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.983309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.983321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.983631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.983645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.983818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.983830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.984149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.984161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.984475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.984486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.984761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.984772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.985116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.985128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.985443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.985454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.985765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.985776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.985966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.985978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.986355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.986367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.986704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.986716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.987047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.987059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.987368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.987380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.987692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.987703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.988015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.988026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.988239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.988250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.988439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.988451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.988726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.988738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.771 [2024-12-05 21:24:35.989027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.989038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.989401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.989412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.989723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.989736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.990028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.990040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 00:31:34.771 [2024-12-05 21:24:35.990357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.771 [2024-12-05 21:24:35.990367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.771 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:35.990688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.990699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.991007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.991018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.991314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.991325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.991629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.991640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.992025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.992038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:35.992348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.992358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.992733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.992744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.993052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.993064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.993356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.993367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.993676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.993687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:35.994028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.994040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.994373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.994383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.994684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.994696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.995000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.995012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.995231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.995243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:35.995550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.995561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.995869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.995881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.996036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.996047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Write completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 Read completed with error (sct=0, sc=8)
00:31:34.772 starting I/O failed
00:31:34.772 [2024-12-05 21:24:35.996259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:31:34.772 [2024-12-05 21:24:35.996612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.772 [2024-12-05 21:24:35.996629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:34.772 qpair failed and we were unable to recover it.
00:31:34.772 [2024-12-05 21:24:35.997093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.772 [2024-12-05 21:24:35.997122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:34.772 qpair failed and we were unable to recover it.
00:31:34.772 [2024-12-05 21:24:35.997448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.997458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.997641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.997650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.998077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.998107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.998423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.998432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.998747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.998756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:35.999081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.999090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.999406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.999415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.999717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:35.999725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:35.999996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.000004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.000328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.000336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:36.000647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.000656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.000966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.000975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.001241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.001249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.001566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.001574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.001879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.001888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:36.002094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.002104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.002328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.002335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.002600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.002608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.002787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.002795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.002963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.002972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:36.003329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.003337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.003522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.003531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.003707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.003716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.003930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.003938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 00:31:34.772 [2024-12-05 21:24:36.004147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.004155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it. 
00:31:34.772 [2024-12-05 21:24:36.004491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.772 [2024-12-05 21:24:36.004499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.772 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages for tqpair=0x7f30fc000b90 (addr=10.0.0.2, port=4420) repeated through 2024-12-05 21:24:36.039151 ...]
00:31:34.774 [2024-12-05 21:24:36.039466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.039475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.039814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.039822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.040127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.040136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.040440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.040449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.040626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.040635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 
00:31:34.774 [2024-12-05 21:24:36.040958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.040966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.041266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.041274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.041452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.041461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.041753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.041762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.042088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.042097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 
00:31:34.774 [2024-12-05 21:24:36.042376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.042384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.042645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.042655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.042954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.042962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.043131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.043139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.774 qpair failed and we were unable to recover it. 00:31:34.774 [2024-12-05 21:24:36.043419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.774 [2024-12-05 21:24:36.043428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.043737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.043745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.044066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.044077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.044225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.044234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.044510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.044517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.044786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.044795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.045065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.045074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.045380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.045388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.045687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.045696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.045981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.045989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.046309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.046317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.046619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.046627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.046935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.046943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.047284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.047292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.047598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.047605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.047929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.047938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.048270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.048279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.048564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.048572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.048878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.048886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.049210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.049218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.049534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.049543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.049607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.049615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.049897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.049906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.050220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.050228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.050555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.050564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.050868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.050877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.051166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.051174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.051490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.051498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.051802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.051810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.052100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.052109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.052398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.052405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.052711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.052719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.053129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.053138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.053511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.053519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.053810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.053818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.054088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.054096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.054404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.054413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.054718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.054728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.055013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.055022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.055378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.055387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.055575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.055583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.055852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.055860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.056194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.056203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.056515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.056523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.056829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.056837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.057138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.057146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.057432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.057440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.057636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.057645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.057959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.057968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.058175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.058183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.058495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.058503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.058705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.058713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.058973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.058981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.059306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.059315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.059647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.059655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.059988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.059997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 
00:31:34.775 [2024-12-05 21:24:36.060300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.060308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.060640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.060649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.060978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.060987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.061284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.775 [2024-12-05 21:24:36.061291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.775 qpair failed and we were unable to recover it. 00:31:34.775 [2024-12-05 21:24:36.061559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.776 [2024-12-05 21:24:36.061567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.776 qpair failed and we were unable to recover it. 
00:31:34.776 [2024-12-05 21:24:36.061762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.776 [2024-12-05 21:24:36.061770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.776 qpair failed and we were unable to recover it. 
00:31:34.777 last message pair repeated ~114 more times between 21:24:36.062 and 21:24:36.096 (all: connect() failed, errno = 111; sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it). 
00:31:34.777 [2024-12-05 21:24:36.096613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.096622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.096938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.096946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.097251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.097260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.097546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.097556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.097869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.097880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 
00:31:34.777 [2024-12-05 21:24:36.098049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.098058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.098330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.098339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.098619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.098629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.098934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.098943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.099155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.099162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 
00:31:34.777 [2024-12-05 21:24:36.099462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.099471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.099658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.777 [2024-12-05 21:24:36.099667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.777 qpair failed and we were unable to recover it. 00:31:34.777 [2024-12-05 21:24:36.099997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.100006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.100331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.100340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.100645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.100654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.101001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.101010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.101294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.101626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.101635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.101943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.101952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.102299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.102309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.102612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.102621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.102878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.102888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.103217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.103226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.103522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.103530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.103840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.103849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.104154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.104164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.104312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.104321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.104618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.104628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.104913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.104922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.105239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.105247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.105545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.105553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.105866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.105875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.106181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.106189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.106486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.106494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.106802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.106811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.107119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.107128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.107408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.107416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.107731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.107740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.108060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.108069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.108246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.108256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.108555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.108565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.108865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.108874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.109138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.109146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.109446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.109455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.109735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.109745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.110019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.110029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.110228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.110238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.110565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.110573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.110855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.110869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.111166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.111175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.111364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.111372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.111587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.111597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.111896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.111905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.112230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.112239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.112548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.112557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.112868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.112877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.113038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.113047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.113310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.113318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.113638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.113648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.113958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.113967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.114293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.114301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.114599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.114610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.114940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.114949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.115245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.115254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.115535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.115544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.115806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.115815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.116183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.116193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.116456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.116464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.116760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.116768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.117044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.117052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 
00:31:34.778 [2024-12-05 21:24:36.117251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.117260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.117582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.778 [2024-12-05 21:24:36.117591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.778 qpair failed and we were unable to recover it. 00:31:34.778 [2024-12-05 21:24:36.117872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.117882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 00:31:34.779 [2024-12-05 21:24:36.118190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.118198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 00:31:34.779 [2024-12-05 21:24:36.118458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.118465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 
00:31:34.779 [2024-12-05 21:24:36.118770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.118779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 00:31:34.779 [2024-12-05 21:24:36.119077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.119086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 00:31:34.779 [2024-12-05 21:24:36.119267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.119277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 00:31:34.779 [2024-12-05 21:24:36.119600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.119610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 00:31:34.779 [2024-12-05 21:24:36.119919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.779 [2024-12-05 21:24:36.119929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.779 qpair failed and we were unable to recover it. 
00:31:34.780 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it record repeated 110 more times, timestamps 21:24:36.120221 through 21:24:36.152715 ...]
00:31:34.780 [2024-12-05 21:24:36.153028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.780 [2024-12-05 21:24:36.153037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.780 qpair failed and we were unable to recover it. 00:31:34.780 [2024-12-05 21:24:36.153347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.780 [2024-12-05 21:24:36.153357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.780 qpair failed and we were unable to recover it. 00:31:34.780 [2024-12-05 21:24:36.153643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.780 [2024-12-05 21:24:36.153652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.780 qpair failed and we were unable to recover it. 00:31:34.780 [2024-12-05 21:24:36.153956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.780 [2024-12-05 21:24:36.153964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.780 qpair failed and we were unable to recover it. 00:31:34.780 [2024-12-05 21:24:36.154286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.780 [2024-12-05 21:24:36.154295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.780 qpair failed and we were unable to recover it. 
00:31:34.780 [2024-12-05 21:24:36.154661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.780 [2024-12-05 21:24:36.154669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.154843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.154852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.155140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.155149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.155337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.155345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.155647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.155655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.155984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.155993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.156318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.156327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.156631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.156639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.156946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.156955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.157293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.157302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.157603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.157611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.157807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.157815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.158136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.158146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.158429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.158440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.158748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.158757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.159054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.159062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.159376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.159385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.159673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.159682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.159976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.159985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.160308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.160316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.160623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.160633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.161005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.161014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.161150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.161158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.161430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.161439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.161668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.161676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.161968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.161977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.162276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.162285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.162588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.162596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.162780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.162788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.163014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.163024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.163340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.163348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.163546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.163554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.163859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.163876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.164163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.164173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.164456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.164464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.164647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.164655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.164929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.164938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.165196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.165204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.165388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.165397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.165721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.165731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.166036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.166044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.166361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.166369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.166671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.166679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.166975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.166984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.167275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.167284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.167592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.167601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.167892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.167901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.168230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.168238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.168417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.168425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.168728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.168737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.169039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.169047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.169353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.169361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.169671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.169680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.170035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.170045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.170378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.170387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.170505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.170513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.170767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.170776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.171089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.171097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.171400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.171408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.171738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.171747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 
00:31:34.781 [2024-12-05 21:24:36.172072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.172081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.172388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.172397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.172608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.172617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.172921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.172929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.781 qpair failed and we were unable to recover it. 00:31:34.781 [2024-12-05 21:24:36.173229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.781 [2024-12-05 21:24:36.173237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 
00:31:34.782 [2024-12-05 21:24:36.173537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.173546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.173808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.173817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.174024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.174033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.174221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.174229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.174552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.174561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 
00:31:34.782 [2024-12-05 21:24:36.174841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.174850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.175146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.175156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.175475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.175484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.175787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.175796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 00:31:34.782 [2024-12-05 21:24:36.176140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.782 [2024-12-05 21:24:36.176150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:34.782 qpair failed and we were unable to recover it. 
00:31:35.116 [2024-12-05 21:24:36.209728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.209737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.210071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.210081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.210273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.210283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.210608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.210616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.210785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.210793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 
00:31:35.116 [2024-12-05 21:24:36.211083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.211093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.211377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.211385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.211692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.211701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.212086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.212095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.212394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.212403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 
00:31:35.116 [2024-12-05 21:24:36.212600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.212608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.212812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.212820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.213083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.213091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.213389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.213396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.213584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.213594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 
00:31:35.116 [2024-12-05 21:24:36.213898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.213907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.214179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.214187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.214500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.214509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.214834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.214842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.215148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.215157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 
00:31:35.116 [2024-12-05 21:24:36.215345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.215353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.215693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.215702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.216033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.216041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.116 [2024-12-05 21:24:36.216326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.116 [2024-12-05 21:24:36.216334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.116 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.216637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.216647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.216960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.216968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.217242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.217251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.217588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.217596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.217905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.217915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.218215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.218223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.218506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.218513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.218821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.218829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.218997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.219006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.219318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.219327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.219611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.219618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.219916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.219924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.220229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.220237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.220548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.220556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.220840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.220848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.221216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.221225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.221533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.221542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.221853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.221867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.222180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.222187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.222505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.222513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.222840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.222849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.223158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.223166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.223469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.223478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.223661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.223670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.223925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.223934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.224242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.224252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.224591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.224891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.224899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.225176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.225185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.225499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.225507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.225798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.225806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.225990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.225999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.226317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.226325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.226599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.226607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.226905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.226913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.227223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.227231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.227536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.227544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.227850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.227859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.228195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.228204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.228516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.228524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.228694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.228703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.229028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.229037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.229314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.229323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.229634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.229643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.229941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.229951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.230258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.230266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.230458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.230466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.230767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.230778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.231116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.231125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.231396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.231404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.231685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.231693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.231868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.231878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.232189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.232197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.232502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.232512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.232836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.232844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.233141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.233149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.233456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.233465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.233769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.233777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.234092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.234101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.234411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.234421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 00:31:35.117 [2024-12-05 21:24:36.234727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.117 [2024-12-05 21:24:36.234735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.117 qpair failed and we were unable to recover it. 
00:31:35.117 [2024-12-05 21:24:36.235044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.235052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.235413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.235421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.235719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.235727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.236027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.236035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.236353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.236361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.236643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.236652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.236912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.236920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.237226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.237234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.237498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.237507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.237806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.237815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.238134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.238142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.238354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.238362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.238625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.238634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.238913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.238921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.239269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.239277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.239592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.239601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.239796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.239804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.240082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.240090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.240402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.240411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.240710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.240719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.241000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.241008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.241281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.241289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.241598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.241607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.241914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.241924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.242238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.242245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.242536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.242544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.242850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.242858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.243055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.243064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.243376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.243384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.243676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.243685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.243991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.243999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.244297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.244306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.244615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.244624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.244798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.244806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.245117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.245125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.245372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.245380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.245690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.245700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.246029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.246038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.246337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.246346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.246537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.246545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.246865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.246873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.247064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.247072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.247379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.247388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.247673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.247681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.247994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.248003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.248308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.248317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.248518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.248525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.248790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.248798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.249095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.249103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.249282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.249291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 
00:31:35.118 [2024-12-05 21:24:36.249592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.249601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.249927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.249936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.250261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.250270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.250617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.250625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.118 qpair failed and we were unable to recover it. 00:31:35.118 [2024-12-05 21:24:36.250929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.118 [2024-12-05 21:24:36.250937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.251269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.251278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.251453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.251461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.251781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.251790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.252099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.252107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.252412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.252421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.252628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.252636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.252851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.252859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.253144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.253152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.253453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.253463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.253643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.253652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.253954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.253963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.254141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.254149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.254481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.254490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.254856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.254868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.255841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.255861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.256178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.256187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.256515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.256524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.256833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.256843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.257169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.257178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.257493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.257502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.257811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.257821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.258117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.258127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.258445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.258455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.258746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.258755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.259073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.259083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.259281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.259290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.259566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.259575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.259882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.259892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.260204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.260213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.119 [2024-12-05 21:24:36.260429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.260438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 
00:31:35.119 [2024-12-05 21:24:36.260769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.119 [2024-12-05 21:24:36.260778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.119 qpair failed and we were unable to recover it. 00:31:35.121 [... the same three-part error entry — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 2024-12-05 21:24:36.260959 through 21:24:36.294053 ...]
00:31:35.121 [2024-12-05 21:24:36.294342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.294350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.294553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.294560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.294869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.294878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.295189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.295198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.295531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.295540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 
00:31:35.121 [2024-12-05 21:24:36.295858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.295877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.296188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.296197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.296461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.296469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.296676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.296684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.297109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.297118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 
00:31:35.121 [2024-12-05 21:24:36.297427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.297438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.297732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.297740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.298034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.298042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.298369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.298377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.298592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.298599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 
00:31:35.121 [2024-12-05 21:24:36.298930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.298938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.299302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.299311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.299617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.299624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.299967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.299975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.300269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.300286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 
00:31:35.121 [2024-12-05 21:24:36.300623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.300631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.300973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.300983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.301283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.301291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.301623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.301632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.301972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.301981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 
00:31:35.121 [2024-12-05 21:24:36.302292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.302300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.302567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.302576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.302888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.302897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.303190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.303198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.303372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.303380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 
00:31:35.121 [2024-12-05 21:24:36.303666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.121 [2024-12-05 21:24:36.303674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.121 qpair failed and we were unable to recover it. 00:31:35.121 [2024-12-05 21:24:36.303984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.303992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.304294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.304302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.304605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.304614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.304879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.304888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.305148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.305157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.305450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.305458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.305761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.305770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.305996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.306005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.306323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.306332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.306692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.306701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.307005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.307014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.307202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.307209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.307530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.307538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.307865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.307874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.308174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.308183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.308483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.308493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.308695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.308703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.308964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.308972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.309266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.309274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.309312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.309320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.309660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.309668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.309871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.309881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.310184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.310193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.310505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.310514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.310723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.310733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.311051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.311060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.311254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.311261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.311525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.311534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.311861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.311873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.312161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.312170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.312345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.312353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.312634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.312643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.312937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.312945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.313142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.313150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.313457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.313466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.313767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.313776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.314013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.314022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.314429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.314438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.314607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.314616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.314801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.314809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.315010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.315019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.315327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.315336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.315606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.315614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 00:31:35.122 [2024-12-05 21:24:36.315839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.122 [2024-12-05 21:24:36.315848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.122 qpair failed and we were unable to recover it. 
00:31:35.122 [2024-12-05 21:24:36.316159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.122 [2024-12-05 21:24:36.316169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.122 qpair failed and we were unable to recover it.
00:31:35.124 [ ... the same triplet — connect() failed, errno = 111 (Connection refused); sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats 114 more times between 21:24:36.316434 and 21:24:36.348848 ... ]
00:31:35.124 [2024-12-05 21:24:36.349182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.349192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.349491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.349501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.349802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.349812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.350166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.350176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.350474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.350482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 
00:31:35.124 [2024-12-05 21:24:36.350720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.350728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.351032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.351041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.351367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.351375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.351686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.351695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.351882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.351891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 
00:31:35.124 [2024-12-05 21:24:36.352229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.352238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.352539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.352548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.352860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.352872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.353174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.353183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 00:31:35.124 [2024-12-05 21:24:36.353455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.124 [2024-12-05 21:24:36.353464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.124 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.353765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.353775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.354073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.354083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.354400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.354410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.354720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.354730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.354958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.354968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.355194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.355205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.355511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.355521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.355833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.355843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.356148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.356158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.356381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.356391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.356677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.356686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.356935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.356945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.357318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.357327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.357629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.357639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.357857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.357870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.358164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.358173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.358359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.358367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.358554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.358562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.358873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.358883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.359318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.359326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.359573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.359581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.359830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.359838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.360158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.360167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.360476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.360484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.360808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.360817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.361207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.361216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.361502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.361511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.361781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.361791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.362109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.362118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.362306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.362314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.362693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.362702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.362960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.362969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.363309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.363317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.363510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.363519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.363816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.363825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.364024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.364034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.364235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.364243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.364553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.364561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.364874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.364883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.365197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.365205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.365480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.365489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.365759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.365767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.366147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.366156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.366467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.366476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.366800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.366809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.367103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.367113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.367325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.367333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.367619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.367629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.367925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.367935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.368326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.368334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.368571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.368581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.368897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.368906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.369110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.369119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.369380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.369388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.369672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.369682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.369866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.369875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.370173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.370182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.370370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.370380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.370678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.370687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 00:31:35.125 [2024-12-05 21:24:36.371018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.125 [2024-12-05 21:24:36.371027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.125 qpair failed and we were unable to recover it. 
00:31:35.125 [2024-12-05 21:24:36.371335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.126 [2024-12-05 21:24:36.371344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.126 qpair failed and we were unable to recover it.
00:31:35.126 [... the same connect() failed (errno = 111) / qpair failed message pair repeated for successive retries from 21:24:36.371657 through 21:24:36.403461 ...]
00:31:35.127 [2024-12-05 21:24:36.403633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.127 [2024-12-05 21:24:36.403642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.127 qpair failed and we were unable to recover it.
00:31:35.127 [2024-12-05 21:24:36.403811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.403820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.404188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.404197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.404504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.404514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.404817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.404826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.405064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.405074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.405384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.405393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.405694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.405703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.405999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.406007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.406187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.406195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.406496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.406504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.406808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.406816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.407156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.407164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.407333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.407341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.407526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.407535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.407838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.407847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.408275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.408283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.408619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.408628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.408951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.408960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.409251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.409259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.409558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.409567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.409776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.409786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.410101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.410110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.410429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.410438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.410621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.410630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.410837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.410846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.411042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.411052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.411346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.411356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.411671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.411679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.411984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.411992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.412325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.412334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.412660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.412671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.413057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.413066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.413398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.413407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.413777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.413785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.414123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.414132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.414383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.414391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.414719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.414728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.415003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.415012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.415222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.415230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.415531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.415540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.415853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.415865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.416170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.416179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.416339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.416347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.416628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.416637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.416709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.416718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.416911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.416920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.417135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.417143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.417310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.417317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.417656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.417664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.417846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.417855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.418161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.418170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.418485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.418503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.418812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.418820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.419185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.419194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.419540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.419549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.128 [2024-12-05 21:24:36.419851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.419860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.420094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.420102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.420423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.420431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.420767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.420777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 00:31:35.128 [2024-12-05 21:24:36.421040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.128 [2024-12-05 21:24:36.421048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.128 qpair failed and we were unable to recover it. 
00:31:35.129 [2024-12-05 21:24:36.421379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.421388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.421704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.421713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.421937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.421946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.422186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.422194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.422514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.422522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 
00:31:35.129 [2024-12-05 21:24:36.422869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.422878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.423195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.423204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.423507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.423517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.423800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.423808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.424172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.424180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 
00:31:35.129 [2024-12-05 21:24:36.424495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.424505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.424829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.424838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.425045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.425053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.425353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.425362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.425701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.425710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 
00:31:35.129 [2024-12-05 21:24:36.425937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.425945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.426337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.426345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.426550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.426558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.426816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.426824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 00:31:35.129 [2024-12-05 21:24:36.427063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.129 [2024-12-05 21:24:36.427071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.129 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.459194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.459202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.459517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.459526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.459811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.459819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.460135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.460144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.460456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.460465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.460773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.460781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.461091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.461099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.461296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.461304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.461615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.461624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.461956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.461965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.462308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.462317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.462384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.462392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.462722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.462731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.463060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.463069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.463397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.463406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.463736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.463745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.464073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.464081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.464326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.464334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.464647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.464655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.464967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.464976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.465334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.465342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.465648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.465656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.465868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.465877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.465972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.465979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.466305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.466314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.466624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.466632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.466937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.466945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.467263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.467275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.467622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.467631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.467872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.467880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.468196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.468204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.468512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.468521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.468712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.468721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.468987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.468997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.469321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.469330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.469506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.469515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.469850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.469859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.470255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.470263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.470572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.470582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.470763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.470772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 
00:31:35.131 [2024-12-05 21:24:36.471126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.471135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.471457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.471465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.471667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.471675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.131 [2024-12-05 21:24:36.471957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.131 [2024-12-05 21:24:36.471966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.131 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.472053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.472062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 
00:31:35.132 [2024-12-05 21:24:36.472363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.472371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.472706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.472714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.472924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.472932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.473275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.473283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.473587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.473596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 
00:31:35.132 [2024-12-05 21:24:36.473937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.473947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.474265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.474273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.474603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.474612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.474796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.474805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.475120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.475129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 
00:31:35.132 [2024-12-05 21:24:36.475328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.475336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.475653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.475662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.475979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.475988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.476390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.476398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.476708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.476716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 
00:31:35.132 [2024-12-05 21:24:36.477127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.477136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.477417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.477426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.477676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.477685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.477968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.477977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.478205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.478213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 
00:31:35.132 [2024-12-05 21:24:36.478541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.478550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.478760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.478768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.479078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.479088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.132 [2024-12-05 21:24:36.479422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.132 [2024-12-05 21:24:36.479430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.132 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.479761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.479772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 
00:31:35.410 [2024-12-05 21:24:36.480089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.480099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.480413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.480422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.480715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.480723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.480937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.480946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.481271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.481279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 
00:31:35.410 [2024-12-05 21:24:36.481618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.481626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.481816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.481825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.482125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.482133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.482344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.482352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 00:31:35.410 [2024-12-05 21:24:36.482654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.410 [2024-12-05 21:24:36.482664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.410 qpair failed and we were unable to recover it. 
00:31:35.412 [2024-12-05 21:24:36.514614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.514623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.514729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.514737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.515053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.515061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.515389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.515397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.515730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.515738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 
00:31:35.412 [2024-12-05 21:24:36.515972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.515980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.516304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.516311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.516625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.516632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.516929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.516937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.517257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.517265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 
00:31:35.412 [2024-12-05 21:24:36.517471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.517479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.517776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.517785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.412 qpair failed and we were unable to recover it. 00:31:35.412 [2024-12-05 21:24:36.518178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.412 [2024-12-05 21:24:36.518185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.518395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.518403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.518746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.518754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.519089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.519097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.519428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.519437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.519631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.519639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.519816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.519824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.520112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.520121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.520412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.520420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.520722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.520731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.521068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.521077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.521377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.521385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.521563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.521572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.521952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.521960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.522342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.522350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.522598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.522606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.522921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.522929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.523261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.523269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.523575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.523583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.523759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.523769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.524147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.524155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.524457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.524465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.524677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.524685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.525045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.525054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.525226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.525235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.525544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.525554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.525750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.525758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.525946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.525955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.526220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.526228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.526544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.526551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.526859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.526871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.527084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.527093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.527260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.527268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.527608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.527616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.527809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.527817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.527990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.527998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.528321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.528330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.528644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.528651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.528990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.528999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.529318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.529326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.529662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.529670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.529952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.529960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.530223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.530231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.530535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.530544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.530826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.530834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.530986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.530994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.531227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.531236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.531561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.531570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.531882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.531890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.532205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.532213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.532523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.532531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.532798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.532806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 00:31:35.413 [2024-12-05 21:24:36.533110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.533118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.413 qpair failed and we were unable to recover it. 
00:31:35.413 [2024-12-05 21:24:36.533433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.413 [2024-12-05 21:24:36.533441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.533637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.533645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.533929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.533938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.534241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.534249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.534553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.534561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 
00:31:35.414 [2024-12-05 21:24:36.534851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.534859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.535223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.535231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.535526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.535534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.535847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.535855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 00:31:35.414 [2024-12-05 21:24:36.536033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.414 [2024-12-05 21:24:36.536051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.414 qpair failed and we were unable to recover it. 
00:31:35.414 [2024-12-05 21:24:36.536369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.414 [2024-12-05 21:24:36.536377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.414 qpair failed and we were unable to recover it.
[... the same three-record error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for each reconnect attempt from 21:24:36.536713 through 21:24:36.569077; only the timestamps differ ...]
00:31:35.416 [2024-12-05 21:24:36.569277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.416 [2024-12-05 21:24:36.569284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.416 qpair failed and we were unable to recover it.
00:31:35.416 [2024-12-05 21:24:36.569597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.569606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.569797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.569805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.570109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.570117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.570423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.570431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.570742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.570751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.571062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.571070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.571173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.571180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.571478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.571488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.571833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.571842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.572162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.572170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.572461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.572470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.572678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.572687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.572987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.572995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.573305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.573313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.573475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.573484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.573783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.573791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.574099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.574108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.574412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.574421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.574606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.574615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.574936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.574944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.575346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.575354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.575648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.575657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.575941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.575949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.576286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.576295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.576459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.576468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.576758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.576767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.577178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.577186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.577393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.577401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.577713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.577722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.578047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.578056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.578364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.578373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.578676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.578686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.579006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.579016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.579424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.579432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.579617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.579627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.579980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.579989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.580315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.580324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.580632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.580640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.580844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.580853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.581169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.581177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.581461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.581469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.581784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.581792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.582163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.582172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.582469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.582478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.582787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.582795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.416 [2024-12-05 21:24:36.582927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.582935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.583110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.583118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.583419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.583431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.583733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.583742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 00:31:35.416 [2024-12-05 21:24:36.584026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.416 [2024-12-05 21:24:36.584035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.416 qpair failed and we were unable to recover it. 
00:31:35.417 [2024-12-05 21:24:36.584358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.584367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.584677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.584687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.584937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.584945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.585245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.585253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.585576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.585584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 
00:31:35.417 [2024-12-05 21:24:36.585849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.585857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.586205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.586213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.586530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.586539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.586841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.586850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.587035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.587045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 
00:31:35.417 [2024-12-05 21:24:36.587213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.587222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.587493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.587502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.587816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.587825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.588171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.588180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.588369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.588377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 
00:31:35.417 [2024-12-05 21:24:36.588715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.588724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.589072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.589080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.589386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.589394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.589702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.589710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.590024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.590032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 
00:31:35.417 [2024-12-05 21:24:36.590374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.590382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.590537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.590554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.590776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.590785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.590974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.590982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 00:31:35.417 [2024-12-05 21:24:36.591307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.591316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it. 
00:31:35.417 [2024-12-05 21:24:36.591635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.417 [2024-12-05 21:24:36.591644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.417 qpair failed and we were unable to recover it.
00:31:35.419 [the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeated identically through 2024-12-05 21:24:36.624550]
00:31:35.419 [2024-12-05 21:24:36.624851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.624859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.625164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.625172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.625343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.625352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.625652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.625660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.625973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.625983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 
00:31:35.419 [2024-12-05 21:24:36.626214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.626221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.626495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.626503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.626676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.626685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.626866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.626876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.627114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.627122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 
00:31:35.419 [2024-12-05 21:24:36.627425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.627433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.627606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.627614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.627928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.627937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.628265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.628273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.628585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.628593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 
00:31:35.419 [2024-12-05 21:24:36.628881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.628890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.629165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.629172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.629482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.629490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.629775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.629783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.630078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.630086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 
00:31:35.419 [2024-12-05 21:24:36.630393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.630402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.630756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.630765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.631083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.631091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.631394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.631402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.631587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.631596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 
00:31:35.419 [2024-12-05 21:24:36.631898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.631906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.632244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.632252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.632559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.632567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.632881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.632889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.633146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.633154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 
00:31:35.419 [2024-12-05 21:24:36.633327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.633336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.633640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.633749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.633756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.633933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.419 [2024-12-05 21:24:36.633941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.419 qpair failed and we were unable to recover it. 00:31:35.419 [2024-12-05 21:24:36.634156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.634164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.634347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.634356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.634569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.634577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.634886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.634894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.635214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.635222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.635435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.635443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.635762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.635771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.636108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.636117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.636431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.636439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.636750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.636758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.637091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.637101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.637409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.637417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.637712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.637720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.637942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.637950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.638230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.638237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.638584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.638592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.638905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.638914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.639262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.639270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.639468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.639476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.639704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.639712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.639925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.639933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.640237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.640245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.640421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.640430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.640743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.640751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.640943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.640952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.641300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.641308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.641570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.641752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.641761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.642136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.642144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.642434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.642441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.642593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.642601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.642916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.642924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.643203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.643211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.643508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.643516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.643784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.643792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.644017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.644025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.644298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.644306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.644485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.644494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.644802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.644811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.645093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.645101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 00:31:35.420 [2024-12-05 21:24:36.645409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.420 [2024-12-05 21:24:36.645417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.420 qpair failed and we were unable to recover it. 
00:31:35.420 [2024-12-05 21:24:36.645721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.645729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.645936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.645945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.646287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.646295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.646501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.646509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.646691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.646699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.646873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.646881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.647073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.647082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.647398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.647406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.647522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.647530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.647854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.647867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.648074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.648082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.648411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.648419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.648612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.648620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.648713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.648719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.649063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.649071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.649378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.649386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.649705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.649714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.650015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.650024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.420 qpair failed and we were unable to recover it.
00:31:35.420 [2024-12-05 21:24:36.650345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.420 [2024-12-05 21:24:36.650354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.650567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.650575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.650854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.650864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.651038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.651046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.651269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.651276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.651585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.651593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.651902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.651911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.652206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.652214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.652526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.652534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.652611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.652618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.652868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.652877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.653092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.653100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.653407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.653415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.653708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.653717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.653921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.653930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.654211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.654219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.654393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.654403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.654674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.654681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.655018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.655026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.655332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.655341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.655640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.655648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.655941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.655949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.656276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.656284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.656588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.656596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.656783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.656792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.657143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.657151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.657485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.657493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.657660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.657669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.657892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.657901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.658115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.658123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.658533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.658542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.658843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.658853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.659168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.659176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.659497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.659505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.659830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.659839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.660084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.660092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.660418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.660427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.660613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.660622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.660878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.660887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.661231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.661239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.661542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.661551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.661877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.661887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.662208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.662215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.662480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.662489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.662695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.662703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.662969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.662977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.663294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.663301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.663631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.663640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.663893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.663901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.664199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.664207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.664520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.664528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.664730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.664738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.665064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.665073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.665394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.665402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.665713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.665722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.666063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.666072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.666389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.666397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.666709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.666717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.667039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.667047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.667397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.667405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.667697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.667706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.421 qpair failed and we were unable to recover it.
00:31:35.421 [2024-12-05 21:24:36.667910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.421 [2024-12-05 21:24:36.667918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.668208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.668215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.668532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.668540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.668922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.668931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.669241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.669249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.669563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.669571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.669876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.669884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.670202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.670210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.670249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.670256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.670438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.670446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.670735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.670745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.671049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.671058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.671358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.671366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.671676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.671684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.671997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.672005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.672333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.672341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.672675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.672683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.672984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.672993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.673311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.673320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.673634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.673643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.673970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.673979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.674305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.674314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.674663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.674672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.674973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.674981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.675313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.675321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.675615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.675623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.675808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.675817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.676127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.676136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.676340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.676348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.676665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.676674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.676975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.676983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.677167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.677176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.677464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.677472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.677784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.677792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.678105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.678114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.678380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.422 [2024-12-05 21:24:36.678388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.422 qpair failed and we were unable to recover it.
00:31:35.422 [2024-12-05 21:24:36.678747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.679123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.679131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.679432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.679440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.679748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.679756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.680047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.680055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 
00:31:35.422 [2024-12-05 21:24:36.680368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.680376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.680678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.680687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.680890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.680899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.681283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.681290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.681500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.681509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 
00:31:35.422 [2024-12-05 21:24:36.681820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.681828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.682201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.682209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.682476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.682483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.682794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.682802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.683103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.683125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 
00:31:35.422 [2024-12-05 21:24:36.683448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.683456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.683812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.683821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.684120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.684129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.684448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.684456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 00:31:35.422 [2024-12-05 21:24:36.684666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.422 [2024-12-05 21:24:36.684673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.422 qpair failed and we were unable to recover it. 
00:31:35.422 [2024-12-05 21:24:36.684921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.684930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.685260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.685268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.685396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.685404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.685702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.685710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.686006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.686014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.686202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.686210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.686482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.686490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.686796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.686804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.687118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.687126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.687311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.687319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.687633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.687640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.687814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.687823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.688113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.688121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.688423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.688431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.688783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.688791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.689100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.689108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.689389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.689397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.689719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.689728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.690074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.690082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.690393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.690401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.690735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.690743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.691050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.691058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.691231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.691240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.691578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.691586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.691871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.691880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.692194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.692202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.692513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.692521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.692830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.692838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.693167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.693175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.693486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.693494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.693801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.693808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.694134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.694143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.694463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.694472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.694807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.694816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.695103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.695113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.695432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.695440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.695772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.695779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.696084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.696092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.696317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.696325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.696505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.696514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.696792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.696800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.697139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.697147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.697334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.697343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.697649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.697658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.698002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.698011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.698276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.698284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.698634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.698642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.698938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.698946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.699167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.699174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.699414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.699422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.699732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.699740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.700043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.700052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.700377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.700385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.700696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.700705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 00:31:35.423 [2024-12-05 21:24:36.700911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.423 [2024-12-05 21:24:36.700919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.423 qpair failed and we were unable to recover it. 
00:31:35.423 [2024-12-05 21:24:36.701239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.423 [2024-12-05 21:24:36.701247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.423 qpair failed and we were unable to recover it.
[identical connect() failure sequence (errno = 111) for tqpair=0x7f30fc000b90, addr=10.0.0.2, port=4420 repeated through 21:24:36.734; duplicate log entries omitted]
00:31:35.425 [2024-12-05 21:24:36.734949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.425 [2024-12-05 21:24:36.734958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.425 qpair failed and we were unable to recover it. 00:31:35.425 [2024-12-05 21:24:36.735147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.425 [2024-12-05 21:24:36.735155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.425 qpair failed and we were unable to recover it. 00:31:35.425 [2024-12-05 21:24:36.735479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.425 [2024-12-05 21:24:36.735488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.425 qpair failed and we were unable to recover it. 00:31:35.425 [2024-12-05 21:24:36.735791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.425 [2024-12-05 21:24:36.735800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.425 qpair failed and we were unable to recover it. 00:31:35.425 [2024-12-05 21:24:36.736208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.425 [2024-12-05 21:24:36.736216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.425 qpair failed and we were unable to recover it. 
00:31:35.425 [2024-12-05 21:24:36.736546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.736554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.736742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.736750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.736957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.736965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.737306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.737314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.737499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.737508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.737710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.737718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.737928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.737937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.738223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.738231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.738536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.738544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.738850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.738858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.739134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.739143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.739440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.739448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.739700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.739708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.739918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.739926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.740125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.740133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.740479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.740486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.740685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.740694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.740775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.740783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.741143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.741151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.741349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.741357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.741664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.741672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.741897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.741906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.742146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.742154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.742358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.742367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.742695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.742703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.743020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.743028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.743350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.743358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.743660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.743667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.743936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.743945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.744171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.744180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.744543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.744552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.744759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.744767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.745157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.745165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.745356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.745364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.745624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.745633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.745841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.745849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.746174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.746182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.746461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.746469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.746778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.746786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.747112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.747121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.747459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.747467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.747683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.747690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.748064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.748073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.748373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.748381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.748671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.748679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.748979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.748987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.749177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.749188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.749538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.749546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.749873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.749881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.750198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.750205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.750514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.750521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.750852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.750860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.751186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.751193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.751508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.751516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.751846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.751855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.426 [2024-12-05 21:24:36.752074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.752083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.752406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.752415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.752598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.752608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.752812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.752821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 00:31:35.426 [2024-12-05 21:24:36.753027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.426 [2024-12-05 21:24:36.753036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.426 qpair failed and we were unable to recover it. 
00:31:35.427 [2024-12-05 21:24:36.753371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.753379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.753595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.753603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.753909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.753917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.754275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.754283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.754616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.754624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 
00:31:35.427 [2024-12-05 21:24:36.754921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.754930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.755210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.755218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.755545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.755554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.755854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.755865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 00:31:35.427 [2024-12-05 21:24:36.756035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.756044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 
00:31:35.427 [2024-12-05 21:24:36.756389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.427 [2024-12-05 21:24:36.756397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.427 qpair failed and we were unable to recover it. 
00:31:35.428 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeated ~115 times for tqpair=0x7f30fc000b90 (addr=10.0.0.2, port=4420) between 21:24:36.756 and 21:24:36.789; repeats elided ...] 
00:31:35.429 [2024-12-05 21:24:36.789269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.789277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.789485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.789494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.789779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.789787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.790113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.790122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.790311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.790319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.790613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.790621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.790838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.790847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.791265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.791273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.791561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.791570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.791888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.791897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.792241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.792250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.792557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.792565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.792851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.792859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.793187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.793195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.793505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.793513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.793813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.793821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.794153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.794161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.794454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.794462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.794758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.794766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.795090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.795099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.795316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.795325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.795624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.795634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.795849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.795856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.796183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.796191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.796400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.796408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.796741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.796748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.797057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.797066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.797396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.797404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.797582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.797591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.797905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.797914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.797992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.798000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.798277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.798284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.798573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.798581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.798756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.798765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.798870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.798878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.799180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.799188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.799503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.799511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.799798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.799806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.799924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.799932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.800218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.800226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.800524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.800532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.800841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.800850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.801162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.801171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.801462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.801470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.801781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.801789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 
00:31:35.429 [2024-12-05 21:24:36.802111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.802119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.802445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.802454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.802624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.802633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.429 [2024-12-05 21:24:36.802810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.429 [2024-12-05 21:24:36.802819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.429 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.803195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.803204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
00:31:35.430 [2024-12-05 21:24:36.803512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.803521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.803734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.803742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.804026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.804034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.804240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.804249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.804452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.804460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
00:31:35.430 [2024-12-05 21:24:36.804746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.804754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.804981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.804990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.805314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.805322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.805634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.805643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.805941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.805950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
00:31:35.430 [2024-12-05 21:24:36.806262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.806270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.806578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.806588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.806898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.806907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.807143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.807151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.807456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.807463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
00:31:35.430 [2024-12-05 21:24:36.807772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.807780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.808092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.808100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.808273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.808283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.808578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.808586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.808805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.808813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
00:31:35.430 [2024-12-05 21:24:36.809010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.809019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.809354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.809362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.809587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.809595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.809757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.809765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 00:31:35.430 [2024-12-05 21:24:36.810057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.810065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
00:31:35.430 [2024-12-05 21:24:36.810277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.430 [2024-12-05 21:24:36.810285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.430 qpair failed and we were unable to recover it. 
[... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x7f30fc000b90 against addr=10.0.0.2, port=4420 repeats continuously from 21:24:36.810519 through 21:24:36.842633; duplicate log entries omitted ...]
00:31:35.710 [2024-12-05 21:24:36.842831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.842839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.843116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.843124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.843402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.843410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.843677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.843685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.843875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.843885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.844302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.844310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.844565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.844573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.844735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.844744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.844969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.844977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.845310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.845319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.845613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.845621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.845889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.845897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.846143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.846151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.846455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.846463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.846758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.846767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.847084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.847093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.847417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.847425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.847730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.847738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.848133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.848142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.848460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.848467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.848772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.848780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.849093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.849101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.849415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.849422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.849733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.849742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.850114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.850123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.850290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.850299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.850610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.850619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.850924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.850933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.851132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.851140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.851446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.851453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.851649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.851657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.851923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.851933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.852278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.852287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.852660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.852668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.852987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.852995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 
00:31:35.710 [2024-12-05 21:24:36.853108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.853115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.710 [2024-12-05 21:24:36.853410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.710 [2024-12-05 21:24:36.853418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.710 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.853740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.853749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.854088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.854096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.854386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.854394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.854595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.854603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.854764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.854773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.855068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.855077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.855374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.855383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.855731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.855739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.856057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.856065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.856370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.856378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.856794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.856803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.857104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.857113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.857477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.857485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.857773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.857781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.857971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.857979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.858153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.858161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.858467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.858475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.858767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.858775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.859066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.859075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.859392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.859399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.859572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.859581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.859864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.859873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.860223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.860231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.860441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.860450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.860619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.860628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.860808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.860818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.861142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.861150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.861464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.861472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.861783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.861791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.862107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.862116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.862484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.862491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.862829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.862837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.863121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.863130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.863435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.863443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.863784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.863794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.864093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.864101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.864431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.864438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 00:31:35.711 [2024-12-05 21:24:36.864724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.711 [2024-12-05 21:24:36.864733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.711 qpair failed and we were unable to recover it. 
00:31:35.711 [2024-12-05 21:24:36.865041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.712 [2024-12-05 21:24:36.865049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.712 qpair failed and we were unable to recover it.
00:31:35.715 [2024-12-05 21:24:36.898072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.898080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.898254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.898262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.898599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.898607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.898900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.898908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.899200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.899208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 
00:31:35.715 [2024-12-05 21:24:36.899523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.899529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.899865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.899872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.900191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.900199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.900552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.900559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.900837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.900844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 
00:31:35.715 [2024-12-05 21:24:36.901187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.901195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.901485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.901492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.901797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.901805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.901991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.901997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.902333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.902340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 
00:31:35.715 [2024-12-05 21:24:36.902631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.902638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.902925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.902933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.903259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.903266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.903575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.903582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.903905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.903912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 
00:31:35.715 [2024-12-05 21:24:36.904196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.904203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.904512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.904519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.904806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.904812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.905076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.905083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.905281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.905289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 
00:31:35.715 [2024-12-05 21:24:36.905576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.905583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.905797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.905804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.906103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.906111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.906434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.906443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.906605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.906614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 
00:31:35.715 [2024-12-05 21:24:36.906899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.906908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.907108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.907115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.907429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.715 [2024-12-05 21:24:36.907436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.715 qpair failed and we were unable to recover it. 00:31:35.715 [2024-12-05 21:24:36.907765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.907772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.908079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.908086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.908378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.908384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.908712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.908719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.909008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.909016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.909305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.909312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.909631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.909639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.909934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.909942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.910247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.910254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.910559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.910566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.910892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.910899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.911217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.911223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.911563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.911570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.911859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.911870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.912213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.912219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.912537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.912543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.912832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.912839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.913158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.913166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.913456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.913464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.913658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.913666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.913940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.913947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.914255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.914262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.914550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.914557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.914873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.914880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.915084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.915091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.915381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.915388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.915574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.915583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.915860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.915877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.916068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.916075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.916368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.916375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.916704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.916712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.917020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.917027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.917203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.917211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.917537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.917544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.917923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.917931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.918308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.918316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.918602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.918609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 
00:31:35.716 [2024-12-05 21:24:36.918910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.716 [2024-12-05 21:24:36.918917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.716 qpair failed and we were unable to recover it. 00:31:35.716 [2024-12-05 21:24:36.919263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.717 [2024-12-05 21:24:36.919270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.717 qpair failed and we were unable to recover it. 00:31:35.717 [2024-12-05 21:24:36.919576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.717 [2024-12-05 21:24:36.919582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.717 qpair failed and we were unable to recover it. 00:31:35.717 [2024-12-05 21:24:36.919898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.717 [2024-12-05 21:24:36.919906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.717 qpair failed and we were unable to recover it. 00:31:35.717 [2024-12-05 21:24:36.920241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.717 [2024-12-05 21:24:36.920248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.717 qpair failed and we were unable to recover it. 
00:31:35.717 [2024-12-05 21:24:36.920549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.717 [2024-12-05 21:24:36.920556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.717 qpair failed and we were unable to recover it.
00:31:35.717-00:31:35.720 [the same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats a further ~114 times, with timestamps from 2024-12-05 21:24:36.920 through 21:24:36.954]
00:31:35.720 [2024-12-05 21:24:36.954320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.954326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.954536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.954543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.954853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.954864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.955150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.955157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.955357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.955364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 
00:31:35.720 [2024-12-05 21:24:36.955573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.955580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.955892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.955900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.956223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.956231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.956564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.956571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.956856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.956867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 
00:31:35.720 [2024-12-05 21:24:36.957179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.957186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.957524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.957530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.957873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.957881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.958061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.958070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.958355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.958362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 
00:31:35.720 [2024-12-05 21:24:36.958574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.958581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.958885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.958892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.959232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.959239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.959518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.959524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.959828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.959834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 
00:31:35.720 [2024-12-05 21:24:36.960043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.960050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.960211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.960219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.960514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.960521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.960838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.960844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.961136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.961143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 
00:31:35.720 [2024-12-05 21:24:36.961463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.961470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.961760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.961768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.962101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.962109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.962419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.962426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 00:31:35.720 [2024-12-05 21:24:36.962734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.720 [2024-12-05 21:24:36.962741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.720 qpair failed and we were unable to recover it. 
00:31:35.720 [2024-12-05 21:24:36.963036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.963043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.963222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.963229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.963593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.963600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.963911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.963918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.964221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.964228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.964511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.964518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.964812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.964819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.965150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.965158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.965369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.965378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.965693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.965700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.965992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.965999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.966334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.966340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.966640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.966648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.966813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.966821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.967112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.967120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.967469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.967475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.967791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.967797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.968081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.968088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.968375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.968382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.968586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.968593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.968905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.968912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.969204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.969212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.969523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.969531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.969822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.969829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.970129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.970136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.970446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.970453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.970748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.970755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.971058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.971066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.971367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.971374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.971684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.971691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.971992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.971999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.972164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.972172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.972432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.972440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.972761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.972768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.973075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.973083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 
00:31:35.721 [2024-12-05 21:24:36.973372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.973379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.973703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.973711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.974032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.974039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.721 [2024-12-05 21:24:36.974236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.721 [2024-12-05 21:24:36.974243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.721 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.974623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.974630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 
00:31:35.722 [2024-12-05 21:24:36.974847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.974854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.975195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.975203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.975491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.975498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.975692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.975699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.975969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.975976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 
00:31:35.722 [2024-12-05 21:24:36.976241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.976249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.976543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.976550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.976842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.976849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.977140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.977149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 00:31:35.722 [2024-12-05 21:24:36.977476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.722 [2024-12-05 21:24:36.977483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.722 qpair failed and we were unable to recover it. 
00:31:35.725 [2024-12-05 21:24:37.010915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.010922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.011254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.011261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.011536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.011543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.011713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.011721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.012023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.012031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 
00:31:35.725 [2024-12-05 21:24:37.012224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.012232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.012544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.012552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.012755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.012762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.013185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.013193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.013477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.013483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 
00:31:35.725 [2024-12-05 21:24:37.013791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.013798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.014098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.014106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.014419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.014426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.014704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.014711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.015017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.015024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 
00:31:35.725 [2024-12-05 21:24:37.015202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.015210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.015491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.015498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.015778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.015785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.016093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.016100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.016405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.016412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 
00:31:35.725 [2024-12-05 21:24:37.016738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.016745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.017002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.017009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.017324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.017331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.017616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.017624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.017911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.017919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 
00:31:35.725 [2024-12-05 21:24:37.018215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.018222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.018531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.725 [2024-12-05 21:24:37.018538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.725 qpair failed and we were unable to recover it. 00:31:35.725 [2024-12-05 21:24:37.018827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.018833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.019146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.019153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.019350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.019358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 
00:31:35.726 [2024-12-05 21:24:37.019695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.019702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.020069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.020076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.020368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.020376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.020685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.020692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.020793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.020800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 
00:31:35.726 [2024-12-05 21:24:37.021059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.021067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.021393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.021400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.021704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.021712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.022030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.022038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.022221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.022229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 
00:31:35.726 [2024-12-05 21:24:37.022386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.022394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.022717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.022725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.023029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.023036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.023329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.023336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.023602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.023610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 
00:31:35.726 [2024-12-05 21:24:37.023911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.023918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.024125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.024132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.024417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.024423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.024610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.024624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.024940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.024947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 
00:31:35.726 [2024-12-05 21:24:37.025235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.025241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.025530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.025537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.025819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.025825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.726 [2024-12-05 21:24:37.026157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.726 [2024-12-05 21:24:37.026164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.726 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.026479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.026486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 
00:31:35.727 [2024-12-05 21:24:37.026814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.026821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.026998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.027005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.027269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.027276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.027605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.027611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.027825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.027832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 
00:31:35.727 [2024-12-05 21:24:37.028156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.028163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.028461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.028467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.028778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.028784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.029094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.029101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.029392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.029398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 
00:31:35.727 [2024-12-05 21:24:37.029677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.029683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.030000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.030006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.030172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.030180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.030340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.030347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.030588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.030595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 
00:31:35.727 [2024-12-05 21:24:37.030870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.030878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.031197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.031204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.031536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.031543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.031799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.031806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.032087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.032094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 
00:31:35.727 [2024-12-05 21:24:37.032412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.032419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.032746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.032752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.033055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.033062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.033277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.033283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 00:31:35.727 [2024-12-05 21:24:37.033630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.727 [2024-12-05 21:24:37.033637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.727 qpair failed and we were unable to recover it. 
[... the same three-line failure pattern (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 21:24:37.033928 through 21:24:37.065954 ...]
00:31:35.731 [2024-12-05 21:24:37.066280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.066287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.066568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.066576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.066872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.066880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.067184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.067191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.067510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.067517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 
00:31:35.731 [2024-12-05 21:24:37.067796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.067804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.068121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.068128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.068464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.068471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.068751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.068758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.069075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.069083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 
00:31:35.731 [2024-12-05 21:24:37.069286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.069294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.069708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.731 [2024-12-05 21:24:37.069716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.731 qpair failed and we were unable to recover it. 00:31:35.731 [2024-12-05 21:24:37.070034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.070041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.070349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.070355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.070519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.070526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 
00:31:35.732 [2024-12-05 21:24:37.070795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.070802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.071119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.071126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.071312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.071328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.071643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.071650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.071932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.071940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 
00:31:35.732 [2024-12-05 21:24:37.072321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.072327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.072635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.072641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.072932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.072940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.073223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.073230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.073534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.073541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 
00:31:35.732 [2024-12-05 21:24:37.073818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.073825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.074117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.074125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.074411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.074417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.074744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.074752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.075156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.075164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 
00:31:35.732 [2024-12-05 21:24:37.075455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.075463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.075658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.075665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.075959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.075966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.076260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.076266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.076442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.076450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 
00:31:35.732 [2024-12-05 21:24:37.076719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.076726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.077035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.077042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.732 [2024-12-05 21:24:37.077320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.732 [2024-12-05 21:24:37.077327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.732 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.077635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.077642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.077936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.077943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 
00:31:35.733 [2024-12-05 21:24:37.078257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.078264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.078584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.078591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.078882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.078889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.079184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.079192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.079478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.079486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 
00:31:35.733 [2024-12-05 21:24:37.079798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.079807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.080140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.080148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.080444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.080451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.080692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.080698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.080937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.080944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 
00:31:35.733 [2024-12-05 21:24:37.081238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.081245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.081532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.081538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.081847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.081854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.082057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.082064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.082382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.082388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 
00:31:35.733 [2024-12-05 21:24:37.082690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.082697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.083031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.083038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.083366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.083373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.083663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.083670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.083943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.083951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 
00:31:35.733 [2024-12-05 21:24:37.084261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.084268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.084532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.084539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.084878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.084885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.085037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.085044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.085321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.085328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 
00:31:35.733 [2024-12-05 21:24:37.085513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.085520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.085937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.085944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.733 [2024-12-05 21:24:37.086227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.733 [2024-12-05 21:24:37.086234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.733 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.086511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.086517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.086834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.086840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 
00:31:35.734 [2024-12-05 21:24:37.087138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.087145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.087444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.087451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.087725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.087733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.088054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.088060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.088254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.088261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 
00:31:35.734 [2024-12-05 21:24:37.088482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.088489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.088767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.088774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.089066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.089073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.089353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.089360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.089652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.089658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 
00:31:35.734 [2024-12-05 21:24:37.089867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.089875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.090172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.090179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.090470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.090477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.090764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.090771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.091078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.091085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 
00:31:35.734 [2024-12-05 21:24:37.091471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.091479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.091868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.091875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.092227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.092234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.092542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.092550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.092853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.092860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 
00:31:35.734 [2024-12-05 21:24:37.093151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.093158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.093442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.093449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.093777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.093785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.094079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.094086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.094366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.094373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 
00:31:35.734 [2024-12-05 21:24:37.094708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.094714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.094996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.095004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.734 qpair failed and we were unable to recover it. 00:31:35.734 [2024-12-05 21:24:37.095180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.734 [2024-12-05 21:24:37.095188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.095567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.095574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.095777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.095784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 
00:31:35.735 [2024-12-05 21:24:37.096008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.096015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.096290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.096297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.096591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.096598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.096938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.096945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.097233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.097241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 
00:31:35.735 [2024-12-05 21:24:37.097551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.097558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.097852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.097859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.098139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.098146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.098461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.098468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.098777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.098784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 
00:31:35.735 [2024-12-05 21:24:37.099077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.099084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.099376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.099382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.099670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.099677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.099962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.099969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.100268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.100275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 
00:31:35.735 [2024-12-05 21:24:37.100569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.100575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.100746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.100754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.101012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.101019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.101323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.101331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.101633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.101640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 
00:31:35.735 [2024-12-05 21:24:37.101942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.101950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.102263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.102269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.102584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.102591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.102872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.102879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.103188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.103195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 
00:31:35.735 [2024-12-05 21:24:37.103474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.735 [2024-12-05 21:24:37.103482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.735 qpair failed and we were unable to recover it. 00:31:35.735 [2024-12-05 21:24:37.103793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.103800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.104052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.104060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.104375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.104383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.104695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.104701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 
00:31:35.736 [2024-12-05 21:24:37.104980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.104987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.105244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.105251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.105514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.105522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.105823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.105831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.106223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.106230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 
00:31:35.736 [2024-12-05 21:24:37.106450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.106458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.106788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.106796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.107096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.107102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.107403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.107410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.107701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.107707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 
00:31:35.736 [2024-12-05 21:24:37.107877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.107884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 
Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Read completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 Write completed with error (sct=0, sc=8) 00:31:35.736 starting I/O failed 00:31:35.736 [2024-12-05 21:24:37.108621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:35.736 [2024-12-05 21:24:37.109036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.109094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 
00:31:35.736 [2024-12-05 21:24:37.109467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.109500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.109825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.109856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.736 [2024-12-05 21:24:37.110106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.736 [2024-12-05 21:24:37.110116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.736 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.110398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.110406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.110705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.110713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 
00:31:35.737 [2024-12-05 21:24:37.111008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.111017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.111341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.111350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.111663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.111672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.111959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.111967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.112285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.112293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 
00:31:35.737 [2024-12-05 21:24:37.112555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.112563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.112873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.112881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.113068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.113076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.113353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.113360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.113397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.113404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 
00:31:35.737 [2024-12-05 21:24:37.113687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.113695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.113876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.113886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.114169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.114176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.114393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.114401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 00:31:35.737 [2024-12-05 21:24:37.114645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.737 [2024-12-05 21:24:37.114653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:35.737 qpair failed and we were unable to recover it. 
00:31:35.737 [2024-12-05 21:24:37.114974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.737 [2024-12-05 21:24:37.114982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:35.737 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats approximately 115 more times between 21:24:37.115 and 21:24:37.148 (log timestamps 00:31:35.737–00:31:36.017); repeats elided ...]
00:31:36.017 [2024-12-05 21:24:37.148831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.017 [2024-12-05 21:24:37.148839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.017 qpair failed and we were unable to recover it. 00:31:36.017 [2024-12-05 21:24:37.149145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.017 [2024-12-05 21:24:37.149153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.017 qpair failed and we were unable to recover it. 00:31:36.017 [2024-12-05 21:24:37.149434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.017 [2024-12-05 21:24:37.149442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.017 qpair failed and we were unable to recover it. 00:31:36.017 [2024-12-05 21:24:37.149723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.017 [2024-12-05 21:24:37.149731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.017 qpair failed and we were unable to recover it. 00:31:36.017 [2024-12-05 21:24:37.149886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.149895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 
00:31:36.018 [2024-12-05 21:24:37.150204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.150212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.150493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.150501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.150790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.150798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.151108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.151117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.151423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.151431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 
00:31:36.018 [2024-12-05 21:24:37.151735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.151743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.152046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.152054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.152230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.152238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.152569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.152578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.152912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.152920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 
00:31:36.018 [2024-12-05 21:24:37.153227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.153235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.153542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.153550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.153848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.154136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.154144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.154427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.154436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 
00:31:36.018 [2024-12-05 21:24:37.154748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.154756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.155071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.155080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.155356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.155364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.155675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.155682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.155884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.155893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 
00:31:36.018 [2024-12-05 21:24:37.156179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.156187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.156469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.156477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.156757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.156765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.157024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.157034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.157338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.157346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 
00:31:36.018 [2024-12-05 21:24:37.157656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.157664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.158004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.158012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.018 qpair failed and we were unable to recover it. 00:31:36.018 [2024-12-05 21:24:37.158333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.018 [2024-12-05 21:24:37.158341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.158668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.158676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.158956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.158964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 
00:31:36.019 [2024-12-05 21:24:37.159297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.159305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.159616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.159624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.159923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.159931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.160223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.160231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.160555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.160564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 
00:31:36.019 [2024-12-05 21:24:37.160870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.160879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.161164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.161172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.161338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.161347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.161610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.161619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.161832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.161840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 
00:31:36.019 [2024-12-05 21:24:37.162129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.162138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.162414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.162422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.162732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.162741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.163053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.163061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.163266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.163274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 
00:31:36.019 [2024-12-05 21:24:37.163580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.163588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.163869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.163878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.164179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.164189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.164464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.164475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.164779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.164788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 
00:31:36.019 [2024-12-05 21:24:37.165082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.165091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.165452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.165461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.165722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.165729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.165900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.165909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 00:31:36.019 [2024-12-05 21:24:37.166085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.019 [2024-12-05 21:24:37.166093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.019 qpair failed and we were unable to recover it. 
00:31:36.019 [2024-12-05 21:24:37.166439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.166448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.166752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.166760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.167030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.167038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.167352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.167360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.167671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.167680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 
00:31:36.020 [2024-12-05 21:24:37.167853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.167864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.168161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.168169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.168513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.168522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.168827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.168836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.169177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.169185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 
00:31:36.020 [2024-12-05 21:24:37.169492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.169500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.169776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.169784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.169963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.169972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.170264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.170271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 00:31:36.020 [2024-12-05 21:24:37.170449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.020 [2024-12-05 21:24:37.170458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.020 qpair failed and we were unable to recover it. 
00:31:36.020 [2024-12-05 21:24:37.170710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.020 [2024-12-05 21:24:37.170718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.020 qpair failed and we were unable to recover it.
00:31:36.020 [... same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" error triple repeated for each subsequent reconnect attempt (tqpair=0x7f30fc000b90, addr=10.0.0.2, port=4420, errno = 111) from 21:24:37.170710 through 21:24:37.204916 ...]
00:31:36.024 [2024-12-05 21:24:37.204908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.024 [2024-12-05 21:24:37.204916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.024 qpair failed and we were unable to recover it.
00:31:36.024 [2024-12-05 21:24:37.205106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.205114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.205404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.205411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.205674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.205682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.205981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.205989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.206327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.206335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 
00:31:36.024 [2024-12-05 21:24:37.206644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.206652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.206955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.206963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.207271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.207279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.207584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.207592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.207877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.207886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 
00:31:36.024 [2024-12-05 21:24:37.208198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.208206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.208526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.024 [2024-12-05 21:24:37.208534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.024 qpair failed and we were unable to recover it. 00:31:36.024 [2024-12-05 21:24:37.208822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.208830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.209045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.209054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.209368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.209376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 
00:31:36.025 [2024-12-05 21:24:37.209646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.209654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.209837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.209845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.210197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.210205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.210403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.210411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.210726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.210735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 
00:31:36.025 [2024-12-05 21:24:37.211042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.211050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.211377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.211385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.211668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.211676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.211879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.211887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.212187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.212195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 
00:31:36.025 [2024-12-05 21:24:37.212480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.212488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.212795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.212802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.213110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.213119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.213316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.213325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.213628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.213636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 
00:31:36.025 [2024-12-05 21:24:37.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.213939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.214130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.214139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.214318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.214326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.214604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.214612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.214987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.214995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 
00:31:36.025 [2024-12-05 21:24:37.215210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.215218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.025 [2024-12-05 21:24:37.215511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.025 [2024-12-05 21:24:37.215518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.025 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.215794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.215803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.216111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.216122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.216436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.216443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 
00:31:36.026 [2024-12-05 21:24:37.216759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.216768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.217055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.217063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.217248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.217256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.217524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.217532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.217873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.217881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 
00:31:36.026 [2024-12-05 21:24:37.218207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.218215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.218538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.218546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.218876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.218885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.219211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.219219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.219510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.219518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 
00:31:36.026 [2024-12-05 21:24:37.219808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.219816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.220107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.220116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.220395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.220403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.220667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.220674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.220934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.220943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 
00:31:36.026 [2024-12-05 21:24:37.221250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.221258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.221505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.221514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.221700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.221709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.222013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.222022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.222345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.222353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 
00:31:36.026 [2024-12-05 21:24:37.222652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.222661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.222955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.222963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.223289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.223298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.223617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.223625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 00:31:36.026 [2024-12-05 21:24:37.223931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.223940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.026 qpair failed and we were unable to recover it. 
00:31:36.026 [2024-12-05 21:24:37.224243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.026 [2024-12-05 21:24:37.224252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.224539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.224548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.224853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.224865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.225137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.225146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.225425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.225433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 
00:31:36.027 [2024-12-05 21:24:37.225738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.225747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.226019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.226028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.226326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.226335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.226615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.226624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 00:31:36.027 [2024-12-05 21:24:37.226901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.226910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 
00:31:36.027 [2024-12-05 21:24:37.227236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.027 [2024-12-05 21:24:37.227244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.027 qpair failed and we were unable to recover it. 
00:31:36.031 (the preceding three-message group repeated 114 more times between 21:24:37.227548 and 21:24:37.261006: every connect() attempt to tqpair=0x7f30fc000b90 at addr=10.0.0.2, port=4420 failed with errno = 111, and the qpair could not be recovered)
00:31:36.031 [2024-12-05 21:24:37.261181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.261191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.261497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.261505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.261815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.261823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.262108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.262116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.262399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.262407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 
00:31:36.031 [2024-12-05 21:24:37.262704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.262712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.263018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.263027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.263317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.263326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.263689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.263699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.263999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.264007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 
00:31:36.031 [2024-12-05 21:24:37.264347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.264354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.264644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.264652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.264939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.264947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.265260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.265268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 00:31:36.031 [2024-12-05 21:24:37.265563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.031 [2024-12-05 21:24:37.265572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.031 qpair failed and we were unable to recover it. 
00:31:36.032 [2024-12-05 21:24:37.265898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.265907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.266211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.266219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.266516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.266523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.266833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.266841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.267141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.267149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 
00:31:36.032 [2024-12-05 21:24:37.267449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.267457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.267774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.267781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.267986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.267994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.268314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.268322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.268602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.268610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 
00:31:36.032 [2024-12-05 21:24:37.268909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.268919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.269233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.269241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.269521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.269529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.269798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.269806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.270113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.270121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 
00:31:36.032 [2024-12-05 21:24:37.270429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.270437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.270627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.270635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.270943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.271266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.271273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.271465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.271473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 
00:31:36.032 [2024-12-05 21:24:37.271775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.271783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.272083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.272091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.272402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.272409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.272738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.272746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.273048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.273056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 
00:31:36.032 [2024-12-05 21:24:37.273379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.273388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.273697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.273706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.274021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.274030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.032 [2024-12-05 21:24:37.274339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.032 [2024-12-05 21:24:37.274346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.032 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.274673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.274682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 
00:31:36.033 [2024-12-05 21:24:37.274858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.274873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.275138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.275146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.275297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.275305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.275630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.275639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.275942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.275951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 
00:31:36.033 [2024-12-05 21:24:37.276259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.276268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.276551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.276560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.276719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.276729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.277042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.277051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.277368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.277376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 
00:31:36.033 [2024-12-05 21:24:37.277648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.277656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.277864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.277873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.278138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.278146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.278453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.278461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.278788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.278796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 
00:31:36.033 [2024-12-05 21:24:37.279090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.279098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.279406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.279414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.279670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.279679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.279936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.279944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.280248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.280256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 
00:31:36.033 [2024-12-05 21:24:37.280576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.280584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.280894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.280903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.281082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.281092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.281282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.281290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 00:31:36.033 [2024-12-05 21:24:37.281602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.033 [2024-12-05 21:24:37.281610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.033 qpair failed and we were unable to recover it. 
00:31:36.034 [2024-12-05 21:24:37.281875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.034 [2024-12-05 21:24:37.281883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.034 qpair failed and we were unable to recover it. 00:31:36.034 [2024-12-05 21:24:37.282147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.034 [2024-12-05 21:24:37.282155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.034 qpair failed and we were unable to recover it. 00:31:36.034 [2024-12-05 21:24:37.282441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.034 [2024-12-05 21:24:37.282449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.034 qpair failed and we were unable to recover it. 00:31:36.034 [2024-12-05 21:24:37.282761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.034 [2024-12-05 21:24:37.282769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.034 qpair failed and we were unable to recover it. 00:31:36.034 [2024-12-05 21:24:37.283115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.034 [2024-12-05 21:24:37.283123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.034 qpair failed and we were unable to recover it. 
00:31:36.034 [2024-12-05 21:24:37.283412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.034 [2024-12-05 21:24:37.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.034 qpair failed and we were unable to recover it.
00:31:36.034 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated verbatim for the retries between 21:24:37.283 and 21:24:37.313; ~100 identical entries omitted ...]
00:31:36.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2303433 Killed "${NVMF_APP[@]}" "$@"
00:31:36.037 qpair failed and we were unable to recover it.
00:31:36.037 [... further identical connect() failed (errno = 111) / qpair recovery failure retries omitted ...]
00:31:36.037 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:31:36.037 [2024-12-05 21:24:37.314693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.037 [2024-12-05 21:24:37.314702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.037 qpair failed and we were unable to recover it.
00:31:36.037 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:36.037 [2024-12-05 21:24:37.315016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.037 [2024-12-05 21:24:37.315025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.037 qpair failed and we were unable to recover it.
00:31:36.037 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:36.037 [2024-12-05 21:24:37.315358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.037 [2024-12-05 21:24:37.315367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:36.038 [2024-12-05 21:24:37.315667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.038 [2024-12-05 21:24:37.315676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 [2024-12-05 21:24:37.315964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.038 [2024-12-05 21:24:37.315973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 [2024-12-05 21:24:37.316169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.038 [2024-12-05 21:24:37.316177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 [2024-12-05 21:24:37.316484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.038 [2024-12-05 21:24:37.316492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 [2024-12-05 21:24:37.316794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.316802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.317088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.317096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.317258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.317267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.317539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.317547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.317860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.317872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 
00:31:36.038 [2024-12-05 21:24:37.318167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.318175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.318496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.318505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.318812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.318821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.319150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.319158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.319472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.319480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 
00:31:36.038 [2024-12-05 21:24:37.319785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.319794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.320093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.320101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.320406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.320415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.320671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.320678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.321014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.321022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 
00:31:36.038 [2024-12-05 21:24:37.321199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.321206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.321585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.321594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.321932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.321941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.322099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.322107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 00:31:36.038 [2024-12-05 21:24:37.322394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.038 [2024-12-05 21:24:37.322403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.038 qpair failed and we were unable to recover it. 
00:31:36.038 [2024-12-05 21:24:37.322708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.038 [2024-12-05 21:24:37.322717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 [2024-12-05 21:24:37.323025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.038 [2024-12-05 21:24:37.323034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.038 qpair failed and we were unable to recover it.
00:31:36.038 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2304460
00:31:36.038 [2024-12-05 21:24:37.323358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2304460
00:31:36.039 [2024-12-05 21:24:37.323367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 [2024-12-05 21:24:37.323642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.323651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2304460 ']'
00:31:36.039 [2024-12-05 21:24:37.323826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.323836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:36.039 [2024-12-05 21:24:37.324061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.324070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:36.039 [2024-12-05 21:24:37.324388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.324397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:36.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:36.039 [2024-12-05 21:24:37.324703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.324713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
21:24:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.039 [2024-12-05 21:24:37.325015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.325025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 [2024-12-05 21:24:37.325351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.325359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 [2024-12-05 21:24:37.325656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.039 [2024-12-05 21:24:37.325665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.039 qpair failed and we were unable to recover it.
00:31:36.039 [2024-12-05 21:24:37.326022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.326031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.326231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.326239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.326534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.326542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.326806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.326814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.327026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.327035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 
00:31:36.039 [2024-12-05 21:24:37.327226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.327235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.327518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.327526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.327865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.327874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.328181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.328189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.328346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.328355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 
00:31:36.039 [2024-12-05 21:24:37.328683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.328693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.039 [2024-12-05 21:24:37.328915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.039 [2024-12-05 21:24:37.328924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.039 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.329267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.329276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.329470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.329478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.329757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.329765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 
00:31:36.040 [2024-12-05 21:24:37.330082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.330091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.330403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.330412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.330500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.330510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.330716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.330725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.330879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.330889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 
00:31:36.040 [2024-12-05 21:24:37.331165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.331174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.331488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.331497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.331597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.331606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.331882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.331892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.332167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.332176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 
00:31:36.040 [2024-12-05 21:24:37.332483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.332492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.332780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.332789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.333079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.333088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.333406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.333416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.333627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.333636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 
00:31:36.040 [2024-12-05 21:24:37.333945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.333954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.334295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.334304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.334486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.334495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.334810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.334818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.335040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.335048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 
00:31:36.040 [2024-12-05 21:24:37.335349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.335357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.335499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.335508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.335841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.335849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.335996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.336004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 00:31:36.040 [2024-12-05 21:24:37.336326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.040 [2024-12-05 21:24:37.336334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.040 qpair failed and we were unable to recover it. 
00:31:36.040 [2024-12-05 21:24:37.336600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.041 [2024-12-05 21:24:37.336608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.041 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure message pair (errno = 111, ECONNREFUSED; tqpair=0x7f30fc000b90, addr=10.0.0.2, port=4420) repeated continuously from 21:24:37.336600 through 21:24:37.368510; repeats elided ...]
00:31:36.044 [2024-12-05 21:24:37.368502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.044 [2024-12-05 21:24:37.368510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.044 qpair failed and we were unable to recover it.
00:31:36.044 [2024-12-05 21:24:37.368845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.368853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.369172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.369180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.369508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.369516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.369704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.369712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.370012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.370021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 
00:31:36.045 [2024-12-05 21:24:37.370341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.370349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.370687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.370696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.370873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.370882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.371184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.371193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.371541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.371549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 
00:31:36.045 [2024-12-05 21:24:37.371767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.371775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.372189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.372197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.372418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.372427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.372646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.372654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.372926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.372934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 
00:31:36.045 [2024-12-05 21:24:37.373251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.373259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.373456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.373465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.373776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.373785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.374092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.374100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.374402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.374410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 
00:31:36.045 [2024-12-05 21:24:37.374648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.374657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.374992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.375001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.375329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.375337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.375518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.375526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 00:31:36.045 [2024-12-05 21:24:37.375856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.045 [2024-12-05 21:24:37.375868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.045 qpair failed and we were unable to recover it. 
00:31:36.046 [2024-12-05 21:24:37.376167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.376175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.376454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.376463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.376772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.376781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.376992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.377002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.377197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.377206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 
00:31:36.046 [2024-12-05 21:24:37.377493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.377501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.377680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.377687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.378003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.378012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.378170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.378179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.378296] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:31:36.046 [2024-12-05 21:24:37.378342] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.046 [2024-12-05 21:24:37.378378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.378385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.378697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.378705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.378899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.378907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.379128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.379135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 
00:31:36.046 [2024-12-05 21:24:37.379448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.379457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.379636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.379645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.379976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.379987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.380325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.380334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.380652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.380660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 
00:31:36.046 [2024-12-05 21:24:37.380980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.380989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.381319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.381328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.381510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.381519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.381830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.381839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.382144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.382153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 
00:31:36.046 [2024-12-05 21:24:37.382439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.382448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.382734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.382742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.383033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.383042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.383374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.383382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.046 qpair failed and we were unable to recover it. 00:31:36.046 [2024-12-05 21:24:37.383673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.046 [2024-12-05 21:24:37.383681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 
00:31:36.047 [2024-12-05 21:24:37.383968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.383977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.384169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.384178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.384486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.384496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.384782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.384790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.385015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.385025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 
00:31:36.047 [2024-12-05 21:24:37.385299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.385308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.385610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.385618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.385906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.385916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.386087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.386096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.386423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.386432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 
00:31:36.047 [2024-12-05 21:24:37.386741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.386750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.386908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.386917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.387240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.387249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.387552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.387560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.387751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.387760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 
00:31:36.047 [2024-12-05 21:24:37.388263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.388273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.388607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.388616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.388789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.388798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.389140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.389149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 00:31:36.047 [2024-12-05 21:24:37.389447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.047 [2024-12-05 21:24:37.389456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.047 qpair failed and we were unable to recover it. 
00:31:36.047 [2024-12-05 21:24:37.389634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.047 [2024-12-05 21:24:37.389642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.047 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from [2024-12-05 21:24:37.389852] through [2024-12-05 21:24:37.422002] ...]
00:31:36.051 [2024-12-05 21:24:37.422330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.051 [2024-12-05 21:24:37.422338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.051 qpair failed and we were unable to recover it. 00:31:36.051 [2024-12-05 21:24:37.422513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.051 [2024-12-05 21:24:37.422522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.051 qpair failed and we were unable to recover it. 00:31:36.051 [2024-12-05 21:24:37.422830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.051 [2024-12-05 21:24:37.422838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.051 qpair failed and we were unable to recover it. 00:31:36.051 [2024-12-05 21:24:37.423169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.051 [2024-12-05 21:24:37.423178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.051 qpair failed and we were unable to recover it. 00:31:36.051 [2024-12-05 21:24:37.423473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.051 [2024-12-05 21:24:37.423482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.051 qpair failed and we were unable to recover it. 
00:31:36.051 [2024-12-05 21:24:37.423798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.051 [2024-12-05 21:24:37.423806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.051 qpair failed and we were unable to recover it. 00:31:36.051 [2024-12-05 21:24:37.424098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.424107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.424249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.424258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.424598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.424607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.424910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.424918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 
00:31:36.052 [2024-12-05 21:24:37.425252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.425261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.425589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.425598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.425938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.425947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.426316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.426325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.426530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.426538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 
00:31:36.052 [2024-12-05 21:24:37.426741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.426750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.427016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.427025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.427326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.427334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.427601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.427611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.427900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.427908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 
00:31:36.052 [2024-12-05 21:24:37.428203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.428211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.428476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.428485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.428783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.428792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.429135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.429143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.429451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.429459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 
00:31:36.052 [2024-12-05 21:24:37.429510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.429518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.429756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.429764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.430080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.430089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.430419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.430427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.430734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.430743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 
00:31:36.052 [2024-12-05 21:24:37.431048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.431056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.431387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.431396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.431738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.431746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.431929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.431938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.052 [2024-12-05 21:24:37.432115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.432123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 
00:31:36.052 [2024-12-05 21:24:37.432436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.052 [2024-12-05 21:24:37.432444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.052 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.432804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.432812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.433124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.433133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.433443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.433452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.433785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.433795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 
00:31:36.053 [2024-12-05 21:24:37.434087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.434095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.434410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.434417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.434795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.434802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.053 [2024-12-05 21:24:37.434959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.053 [2024-12-05 21:24:37.434967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.053 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.435279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.435289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 
00:31:36.330 [2024-12-05 21:24:37.435625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.435633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.435823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.435831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.436119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.436128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.436320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.436329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.436648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.436656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 
00:31:36.330 [2024-12-05 21:24:37.436960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.436968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.437136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.437144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.330 qpair failed and we were unable to recover it. 00:31:36.330 [2024-12-05 21:24:37.437431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.330 [2024-12-05 21:24:37.437439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.437747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.437755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.437929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.437937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 
00:31:36.331 [2024-12-05 21:24:37.438021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.438030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.438291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.438299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.438612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.438621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.438775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.438784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.438963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.438971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 
00:31:36.331 [2024-12-05 21:24:37.439253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.439261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.439579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.439588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.439898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.439906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.440242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.440250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.440547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.440556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 
00:31:36.331 [2024-12-05 21:24:37.440867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.440876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.441192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.441200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.441533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.441542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.441830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.441838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.442128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.442136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 
00:31:36.331 [2024-12-05 21:24:37.442332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.442341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.442638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.442647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.442833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.442843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.443041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.443049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.443372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.443381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 
00:31:36.331 [2024-12-05 21:24:37.443711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.443720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.444030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.444039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.444199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.444207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.444515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.444522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 00:31:36.331 [2024-12-05 21:24:37.444846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.331 [2024-12-05 21:24:37.444856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.331 qpair failed and we were unable to recover it. 
00:31:36.335 [2024-12-05 21:24:37.476927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.335 [2024-12-05 21:24:37.476936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.335 qpair failed and we were unable to recover it. 00:31:36.335 [2024-12-05 21:24:37.477254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.335 [2024-12-05 21:24:37.477262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.477596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.477605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.477882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.477891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.478195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.478203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 
00:31:36.336 [2024-12-05 21:24:37.478504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.478513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.478834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.478842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.479130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.479139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.479457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.479465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.479579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.336 [2024-12-05 21:24:37.479685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.479693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 
00:31:36.336 [2024-12-05 21:24:37.480033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.480042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.480102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.480110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.480215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.480223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.480506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.480515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.480805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.480814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 
00:31:36.336 [2024-12-05 21:24:37.481130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.481139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.481469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.481477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.481662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.481671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.482032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.482041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.482225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.482233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 
00:31:36.336 [2024-12-05 21:24:37.482436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.482445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.482755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.482764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.483080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.483090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.483387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.336 [2024-12-05 21:24:37.483396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.336 qpair failed and we were unable to recover it. 00:31:36.336 [2024-12-05 21:24:37.483705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.483713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.337 [2024-12-05 21:24:37.484041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.484050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.484238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.484246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.484580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.484588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.484902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.484911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.485219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.485228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.337 [2024-12-05 21:24:37.485441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.485449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.485818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.485827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.486192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.486200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.486397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.486405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.486581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.486589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.337 [2024-12-05 21:24:37.486786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.486794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.487087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.487095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.487421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.487429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.487616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.487626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.487937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.487946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.337 [2024-12-05 21:24:37.488272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.488281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.488555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.488564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.488733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.488743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.489032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.489042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.489362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.489371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.337 [2024-12-05 21:24:37.489410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.489417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.489712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.489720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.490016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.490024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.490309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.490316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.490625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.490633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.337 [2024-12-05 21:24:37.490898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.490906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.491310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.491319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.491651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.491660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.491829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.491837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 00:31:36.337 [2024-12-05 21:24:37.492158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.337 [2024-12-05 21:24:37.492166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.337 qpair failed and we were unable to recover it. 
00:31:36.338 [2024-12-05 21:24:37.492458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.492466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.492644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.492653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.492870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.492878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.493197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.493205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.493502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.493510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 
00:31:36.338 [2024-12-05 21:24:37.493808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.493817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.494119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.494128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.494318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.494328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.494630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.494638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.494936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.494944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 
00:31:36.338 [2024-12-05 21:24:37.495280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.495288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.495602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.495610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.495915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.495923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.496341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.496350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.496650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.496658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 
00:31:36.338 [2024-12-05 21:24:37.496935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.496943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.497255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.497264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.497544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.497553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.497821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.497830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 00:31:36.338 [2024-12-05 21:24:37.498150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.338 [2024-12-05 21:24:37.498158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.338 qpair failed and we were unable to recover it. 
00:31:36.338 [2024-12-05 21:24:37.498455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.498463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.498652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.498661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.498974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.498982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.499314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.499323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.499609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.499617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.499854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.499868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.500195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.500203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.500522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.500530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.500825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.500833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.501055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.501064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.501395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.338 [2024-12-05 21:24:37.501403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.338 qpair failed and we were unable to recover it.
00:31:36.338 [2024-12-05 21:24:37.501713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.501722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.502044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.502053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.502442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.502450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.502801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.502810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.503112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.503120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.503415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.503423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.503725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.503734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.503926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.503935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.504262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.504270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.504561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.504569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.504850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.504858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.505212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.505220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.505527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.505535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.505829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.505837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.506164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.506173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.506477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.506486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.506677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.506687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.507000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.507009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.507203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.507211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.507388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.507395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.507568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.507578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.507903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.507911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.508097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.508105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.508414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.508422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.508731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.508740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.509065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.509074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.509358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.509367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.509752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.339 [2024-12-05 21:24:37.509761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.339 qpair failed and we were unable to recover it.
00:31:36.339 [2024-12-05 21:24:37.510066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.510075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.510375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.510384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.510694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.510703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.511019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.511029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.511340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.511348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.511679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.511688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.511969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.511978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.512288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.512296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.512603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.512611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.512902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.512912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.513106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.513116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.513432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.513441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.513769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.513777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.514130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.514139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.514427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.514436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.514756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.514765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.515098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.515107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.515344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.515352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.515410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:36.340 [2024-12-05 21:24:37.515436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:36.340 [2024-12-05 21:24:37.515445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:36.340 [2024-12-05 21:24:37.515452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:36.340 [2024-12-05 21:24:37.515459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:36.340 [2024-12-05 21:24:37.515683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.515692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.515999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.516008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.516326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.516334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.516626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.516634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.516915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.516923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.516979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:36.340 [2024-12-05 21:24:37.517117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.517125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.517188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:36.340 [2024-12-05 21:24:37.517309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:36.340 [2024-12-05 21:24:37.517405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.517413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.517310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:31:36.340 [2024-12-05 21:24:37.517591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.340 [2024-12-05 21:24:37.517601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.340 qpair failed and we were unable to recover it.
00:31:36.340 [2024-12-05 21:24:37.517793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.517803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.518092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.518101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.518271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.518280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.518579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.518587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.518917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.518926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.519126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.519135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.519363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.519371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.519599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.519608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.519914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.519922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.520140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.520147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.520412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.520420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.520649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.520657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.520980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.520989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.521256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.521264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.521507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.521515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.521834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.521843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.522050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.522059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.522240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.522248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.522522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.522531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.522726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.522734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.523023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.523031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.523227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.523237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.523555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.523563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.523871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.523880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.524203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.524212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.524611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.524619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.524828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.524837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.525116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.525125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.525458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.341 [2024-12-05 21:24:37.525467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.341 qpair failed and we were unable to recover it.
00:31:36.341 [2024-12-05 21:24:37.525777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.525786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.526062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.526071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.526256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.526264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.526575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.526583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.526790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.526799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.527116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.527125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.527332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.527340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.527625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.527633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.527947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.527956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.528273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.528282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.528464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.528475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.528784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.528792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.529111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.529120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.529288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.529297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.529605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.529614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.529908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.529916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.530083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.530091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.530261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.342 [2024-12-05 21:24:37.530270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.342 qpair failed and we were unable to recover it.
00:31:36.342 [2024-12-05 21:24:37.530556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.342 [2024-12-05 21:24:37.530565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.342 qpair failed and we were unable to recover it. 00:31:36.342 [2024-12-05 21:24:37.530757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.342 [2024-12-05 21:24:37.530766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.342 qpair failed and we were unable to recover it. 00:31:36.342 [2024-12-05 21:24:37.531040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.342 [2024-12-05 21:24:37.531048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.342 qpair failed and we were unable to recover it. 00:31:36.342 [2024-12-05 21:24:37.531364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.342 [2024-12-05 21:24:37.531372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.342 qpair failed and we were unable to recover it. 00:31:36.342 [2024-12-05 21:24:37.531706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.342 [2024-12-05 21:24:37.531715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.342 qpair failed and we were unable to recover it. 
00:31:36.342 [2024-12-05 21:24:37.532031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.342 [2024-12-05 21:24:37.532039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.342 qpair failed and we were unable to recover it. 00:31:36.342 [2024-12-05 21:24:37.532369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.532377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.532570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.532578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.532637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.532644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.532980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.532989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 
00:31:36.343 [2024-12-05 21:24:37.533287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.533296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.533614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.533623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.533819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.533827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.534124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.534134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.534313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.534323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 
00:31:36.343 [2024-12-05 21:24:37.534652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.534660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.534711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.534717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.534881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.534889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.535080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.535088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.535396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.535405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 
00:31:36.343 [2024-12-05 21:24:37.535560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.535569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.535846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.535855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.536150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.536159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.536468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.536477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.536779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.536788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 
00:31:36.343 [2024-12-05 21:24:37.537106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.537115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.537470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.537479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.537661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.537671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.537956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.537964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.538331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.538339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 
00:31:36.343 [2024-12-05 21:24:37.538405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.538411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.538680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.538688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.539023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.539034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.539197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.539206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 00:31:36.343 [2024-12-05 21:24:37.539463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.343 [2024-12-05 21:24:37.539471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.343 qpair failed and we were unable to recover it. 
00:31:36.344 [2024-12-05 21:24:37.539806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.539815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.540077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.540086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.540412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.540420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.540610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.540619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.540931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.540940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 
00:31:36.344 [2024-12-05 21:24:37.541262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.541271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.541587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.541596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.541900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.541909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.541965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.541972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.542259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.542267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 
00:31:36.344 [2024-12-05 21:24:37.542438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.542447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.542738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.542747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.543044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.543053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.543196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.543204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.543510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.543519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 
00:31:36.344 [2024-12-05 21:24:37.543573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.543580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.543754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.543763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.544066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.544075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.544406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.544414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.544576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.544585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 
00:31:36.344 [2024-12-05 21:24:37.544727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.544735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.545038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.545047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.545246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.545255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.545559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.545568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.545858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.545874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 
00:31:36.344 [2024-12-05 21:24:37.546042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.546051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.546235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.546243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.546568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.344 [2024-12-05 21:24:37.546577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.344 qpair failed and we were unable to recover it. 00:31:36.344 [2024-12-05 21:24:37.546887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.546895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.547236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.547244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 
00:31:36.345 [2024-12-05 21:24:37.547423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.547432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.547836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.547845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.548048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.548056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.548332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.548341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.548657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.548666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 
00:31:36.345 [2024-12-05 21:24:37.548830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.548839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.549169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.549178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.549477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.549488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.549679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.549688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.549879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.549889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 
00:31:36.345 [2024-12-05 21:24:37.550212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.550220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.550536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.550545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.550847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.550855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.551039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.551049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 00:31:36.345 [2024-12-05 21:24:37.551360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.345 [2024-12-05 21:24:37.551368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.345 qpair failed and we were unable to recover it. 
00:31:36.349 [2024-12-05 21:24:37.580950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.580958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.581124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.581131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.581292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.581303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.581604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.581612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.581913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.581921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 
00:31:36.349 [2024-12-05 21:24:37.582149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.582157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.582256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.582263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.582525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.582532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.582852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.349 [2024-12-05 21:24:37.582860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.349 qpair failed and we were unable to recover it. 00:31:36.349 [2024-12-05 21:24:37.583217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.583225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 
00:31:36.350 [2024-12-05 21:24:37.583425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.583433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.583593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.583602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.583921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.583929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.584239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.584247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.584428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.584436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 
00:31:36.350 [2024-12-05 21:24:37.584719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.584727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.585032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.585041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.585356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.585364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.585685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.585692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.585869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.585877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 
00:31:36.350 [2024-12-05 21:24:37.586224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.586233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.586570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.586578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.586732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.586739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.587076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.587084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.587398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.587406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 
00:31:36.350 [2024-12-05 21:24:37.587566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.587573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.587890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.587898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.588212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.588219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.588374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.588383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.588539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.588548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 
00:31:36.350 [2024-12-05 21:24:37.588867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.588875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.589196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.589204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.589520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.589528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.589833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.589842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.590017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.590025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 
00:31:36.350 [2024-12-05 21:24:37.590217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.590225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.350 [2024-12-05 21:24:37.590526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.350 [2024-12-05 21:24:37.590535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.350 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.590576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.590584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.590885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.590894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.591214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.591222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 
00:31:36.351 [2024-12-05 21:24:37.591532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.591540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.591726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.591734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.592050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.592060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.592370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.592378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.592549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.592558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 
00:31:36.351 [2024-12-05 21:24:37.592773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.592781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.593064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.593072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.593408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.593416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.593605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.593614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.593935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.593943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 
00:31:36.351 [2024-12-05 21:24:37.594277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.594285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.594557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.594565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.594886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.594894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.595158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.595165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.595351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.595358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 
00:31:36.351 [2024-12-05 21:24:37.595675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.595683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.596004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.596012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.596324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.596332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.596655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.596663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.596822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.596830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 
00:31:36.351 [2024-12-05 21:24:37.596987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.596995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.597273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.597282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.597606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.597614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.597927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.597935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 00:31:36.351 [2024-12-05 21:24:37.597975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.351 [2024-12-05 21:24:37.597982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.351 qpair failed and we were unable to recover it. 
00:31:36.352 [2024-12-05 21:24:37.598293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.598301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.598343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.598350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.598624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.598631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.598931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.598939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.599266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.599273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 
00:31:36.352 [2024-12-05 21:24:37.599457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.599473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.599787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.599795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.599953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.599961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.600274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.600282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 00:31:36.352 [2024-12-05 21:24:37.600593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.600601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it. 
00:31:36.352 [2024-12-05 21:24:37.600882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.352 [2024-12-05 21:24:37.600890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.352 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages for tqpair=0x7f30fc000b90 (addr=10.0.0.2, port=4420) repeated through 21:24:37.631818; duplicate log lines elided ...]
00:31:36.356 [2024-12-05 21:24:37.632003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.632012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.632199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.632207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.632373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.632380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.632486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.632493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.632794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.632802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 
00:31:36.356 [2024-12-05 21:24:37.632995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.633004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.633317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.633324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.633633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.633641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.633815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.633823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.633867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.633874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 
00:31:36.356 [2024-12-05 21:24:37.634057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.634065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.634247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.634256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.356 qpair failed and we were unable to recover it. 00:31:36.356 [2024-12-05 21:24:37.634574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.356 [2024-12-05 21:24:37.634582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.634909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.634918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.635243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.635251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 
00:31:36.357 [2024-12-05 21:24:37.635564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.635571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.635890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.635898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.636077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.636086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.636297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.636305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.636473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.636481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 
00:31:36.357 [2024-12-05 21:24:37.636746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.636754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.637092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.637100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.637432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.637440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.637752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.637760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.638074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.638083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 
00:31:36.357 [2024-12-05 21:24:37.638412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.638420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.638731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.638739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.638917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.638926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.639107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.639116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.639440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.639448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 
00:31:36.357 [2024-12-05 21:24:37.639752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.639761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.640086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.640094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.640134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.640140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.640462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.640469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.640639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.640647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 
00:31:36.357 [2024-12-05 21:24:37.640948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.640957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.641280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.641288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.641606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.641614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.641933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.641942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.642250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.642259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 
00:31:36.357 [2024-12-05 21:24:37.642593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.357 [2024-12-05 21:24:37.642601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.357 qpair failed and we were unable to recover it. 00:31:36.357 [2024-12-05 21:24:37.642912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.642921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.643142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.643149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.643486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.643495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.643671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.643679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 
00:31:36.358 [2024-12-05 21:24:37.643858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.643869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.644164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.644172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.644507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.644515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.644823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.644830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.645103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.645111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 
00:31:36.358 [2024-12-05 21:24:37.645399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.645407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.645738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.645747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.646067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.646076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.646400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.646409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.646723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.646730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 
00:31:36.358 [2024-12-05 21:24:37.647029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.647038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.647358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.647366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.647681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.647690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.647873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.647882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.648171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.648179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 
00:31:36.358 [2024-12-05 21:24:37.648495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.648503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.648712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.648720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.648897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.648904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.649233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.649242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.649424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.649433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 
00:31:36.358 [2024-12-05 21:24:37.649780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.649788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.650079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.650088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.650246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.650254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.650560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.358 [2024-12-05 21:24:37.650568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.358 qpair failed and we were unable to recover it. 00:31:36.358 [2024-12-05 21:24:37.650880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.650888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 
00:31:36.359 [2024-12-05 21:24:37.651061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.651068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 00:31:36.359 [2024-12-05 21:24:37.651254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.651261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 00:31:36.359 [2024-12-05 21:24:37.651425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.651433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 00:31:36.359 [2024-12-05 21:24:37.651605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.651613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 00:31:36.359 [2024-12-05 21:24:37.651891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.651900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 
00:31:36.359 [2024-12-05 21:24:37.652208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.359 [2024-12-05 21:24:37.652215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.359 qpair failed and we were unable to recover it. 
[... the same pair of errors — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 21:24:37.652208 through 21:24:37.683480 (log timestamps 00:31:36.359-00:31:36.363); duplicate entries elided ...]
00:31:36.363 [2024-12-05 21:24:37.683827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.683835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.684174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.684182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.684358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.684365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.684687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.684694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.685001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.685010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 
00:31:36.363 [2024-12-05 21:24:37.685167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.685175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.685468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.685476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.685826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.685834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.686163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.686171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.686353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.686362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 
00:31:36.363 [2024-12-05 21:24:37.686559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.686567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.686749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.686758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.687044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.687052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.687245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.687254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.687429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.687436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 
00:31:36.363 [2024-12-05 21:24:37.687580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.687587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.687873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.687881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.688156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.688164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.688507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.688515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.688697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.688705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 
00:31:36.363 [2024-12-05 21:24:37.689030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.689038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.689403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.689411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.689721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.689729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.363 [2024-12-05 21:24:37.690029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.363 [2024-12-05 21:24:37.690037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.363 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.690205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.690212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 
00:31:36.364 [2024-12-05 21:24:37.690524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.690532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.690722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.690729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.690902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.690910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.691213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.691221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.691545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.691552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 
00:31:36.364 [2024-12-05 21:24:37.691870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.691878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.692186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.692194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.692499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.692507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.692848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.692856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.693172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.693180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 
00:31:36.364 [2024-12-05 21:24:37.693490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.693498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.693806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.693814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.694128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.694137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.694458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.694467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.694506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.694514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 
00:31:36.364 [2024-12-05 21:24:37.694684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.694693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.695031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.695040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.695407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.695415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.695721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.695729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.696031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 
00:31:36.364 [2024-12-05 21:24:37.696372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.696379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.696557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.696565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.696733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.696741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.697024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.697032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.697342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.697353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 
00:31:36.364 [2024-12-05 21:24:37.697737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.697745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.698051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.698060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.698390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.698398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.698687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.364 [2024-12-05 21:24:37.698695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.364 qpair failed and we were unable to recover it. 00:31:36.364 [2024-12-05 21:24:37.699026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.699034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 
00:31:36.365 [2024-12-05 21:24:37.699361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.699369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.699550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.699557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.699885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.699893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.700232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.700240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.700403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.700411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 
00:31:36.365 [2024-12-05 21:24:37.700591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.700598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.700758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.700765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.701062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.701070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.701474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.701481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.701664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.701672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 
00:31:36.365 [2024-12-05 21:24:37.701882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.701891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.702177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.702185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.702384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.702392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.702665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.702673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.702867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.702876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 
00:31:36.365 [2024-12-05 21:24:37.703058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.703066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.703395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.703403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.703794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.703802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.704105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.704113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 00:31:36.365 [2024-12-05 21:24:37.704445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.365 [2024-12-05 21:24:37.704452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.365 qpair failed and we were unable to recover it. 
00:31:36.365 [2024-12-05 21:24:37.704771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.704779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.705094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.705102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.705405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.705413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.705567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.705575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.705887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.705896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.706217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.706224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.706535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.706543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.706838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.706846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.707146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.707154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.707466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.365 [2024-12-05 21:24:37.707474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.365 qpair failed and we were unable to recover it.
00:31:36.365 [2024-12-05 21:24:37.707782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.707789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.707983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.707992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.708312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.708320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.708636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.708644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.708933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.708943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.709275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.709283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.709469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.709478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.709842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.709850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.710159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.710168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.710465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.710473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.710787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.710795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.711109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.711118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.711271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.711280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.711599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.711608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.711929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.711937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.712160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.712169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.712342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.712349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.712656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.712664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.712977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.712986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.713307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.713315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.713628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.713636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.713936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.713944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.714273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.714281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.714610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.714617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.714738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.366 [2024-12-05 21:24:37.714745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.366 qpair failed and we were unable to recover it.
00:31:36.366 [2024-12-05 21:24:37.715024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.715032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.715353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.715360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.715671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.715679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.715870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.715878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.716045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.716052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.716339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.716347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.716506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.716514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.716826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.716833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.717154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.717162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.717494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.717502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.717822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.717831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.718017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.718026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.718193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.718202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.718519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.718527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.718865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.718874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.719169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.719178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.719364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.719372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.719704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.719712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.720032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.720040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.720213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.720222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.720557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.720565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.720748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.720756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.720940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.720948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.721223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.721231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.721534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.721542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.721874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.721883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.722205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.722213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.722385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.722393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.722652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.722660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.722831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.722839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.723005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.367 [2024-12-05 21:24:37.723013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.367 qpair failed and we were unable to recover it.
00:31:36.367 [2024-12-05 21:24:37.723293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.723301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.723609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.723617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.723913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.723922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.724105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.724113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.724448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.724455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.724793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.724800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.725206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.725215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.725380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.725387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.725540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.725548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.725874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.725883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.726171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.726178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.726532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.726540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.726720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.726728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.726889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.726897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.727252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.727260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.727433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.727441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.727611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.727619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.727659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.727665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.727831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.727840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.728139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.728148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.728462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.728470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.728647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.728654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.729023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.729031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.729349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.729356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.729656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.729664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.730013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.730021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.730331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.730339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.730711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.730719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.730844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.730853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.731390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.731487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.368 [2024-12-05 21:24:37.731932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.368 [2024-12-05 21:24:37.731984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:31:36.368 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.732240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.732272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.732482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.732512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30f8000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.732808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.732817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.733252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.733285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.733609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.733619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.733804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.733812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.734093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.734101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.734420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.734428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.734740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.734749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.734909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.734917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.735221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.735229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.735548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.735557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.735722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.735730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.735929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.735938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.736231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.736239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.736577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.736585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.736896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.736905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.737208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.369 [2024-12-05 21:24:37.737216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.369 qpair failed and we were unable to recover it.
00:31:36.369 [2024-12-05 21:24:37.737516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.737524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.737864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.737873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.738064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.738073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.738399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.738407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.738570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.738578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 
00:31:36.369 [2024-12-05 21:24:37.738721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.738729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.739011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.739019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.739331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.739338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.739679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.739687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.739908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.739918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 
00:31:36.369 [2024-12-05 21:24:37.740102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.740109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.369 [2024-12-05 21:24:37.740403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.369 [2024-12-05 21:24:37.740411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.369 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.740756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.740764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.740950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.740959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.741269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.741277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 
00:31:36.370 [2024-12-05 21:24:37.741434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.741441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.741737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.741745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.741930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.741937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.742117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.742125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.742506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.742516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 
00:31:36.370 [2024-12-05 21:24:37.742712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.742720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.743109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.743118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.743419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.743427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.743589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.743598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.743895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.743903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 
00:31:36.370 [2024-12-05 21:24:37.744101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.744109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.744401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.744409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.744448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.744454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.744776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.744783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.744946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.744954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 
00:31:36.370 [2024-12-05 21:24:37.745356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.745364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.745526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.745533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.745850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.745858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.746175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.746183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.746492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.746500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 
00:31:36.370 [2024-12-05 21:24:37.746815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.746822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.747138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.747146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.747459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.747466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.747622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.747631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.747819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.747827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 
00:31:36.370 [2024-12-05 21:24:37.748122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.748130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.370 [2024-12-05 21:24:37.748470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.370 [2024-12-05 21:24:37.748478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.370 qpair failed and we were unable to recover it. 00:31:36.371 [2024-12-05 21:24:37.748768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.371 [2024-12-05 21:24:37.748776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.371 qpair failed and we were unable to recover it. 00:31:36.371 [2024-12-05 21:24:37.748982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.371 [2024-12-05 21:24:37.748991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.371 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.749324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.749333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 
00:31:36.656 [2024-12-05 21:24:37.749647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.749655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.749971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.749979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.750288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.750296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.750470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.750479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.750517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.750526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 
00:31:36.656 [2024-12-05 21:24:37.750681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.750689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.750973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.750981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.751296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.751304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.751469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.751476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.751787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.751795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 
00:31:36.656 [2024-12-05 21:24:37.751993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.752001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.752312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.752319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.752663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.752672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.752972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.752981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 00:31:36.656 [2024-12-05 21:24:37.753321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.656 [2024-12-05 21:24:37.753330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.656 qpair failed and we were unable to recover it. 
00:31:36.656 [2024-12-05 21:24:37.753521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.753530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.753846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.753854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.754167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.754176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.754485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.754494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.754654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.754663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 
00:31:36.657 [2024-12-05 21:24:37.754702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.754710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.754883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.754892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.755059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.755067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.755247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.755256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.755603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.755611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 
00:31:36.657 [2024-12-05 21:24:37.755789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.755798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.755954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.755963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.756264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.756272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.756600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.756609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 00:31:36.657 [2024-12-05 21:24:37.756938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.657 [2024-12-05 21:24:37.756947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.657 qpair failed and we were unable to recover it. 
00:31:36.657 [2024-12-05 21:24:37.757261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.657 [2024-12-05 21:24:37.757270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420
00:31:36.657 qpair failed and we were unable to recover it.
00:31:36.659 [the three-line error sequence above repeated ~114 more times, timestamps 2024-12-05 21:24:37.757573 through 21:24:37.789014, all with tqpair=0x7f30fc000b90, addr=10.0.0.2, port=4420, errno = 111]
00:31:36.659 [2024-12-05 21:24:37.789344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.789352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.789663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.789671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.790024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.790033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.790215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.790223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.790377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.790385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.790687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.790695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.791008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.791017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.791200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.791209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.791359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.791367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.791671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.791680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.791842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.791850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.792162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.792170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.792471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.792480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.792811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.792820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.793017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.793025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.793188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.793195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.793355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.793365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.793568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.793575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.793875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.793884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.794159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.794166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.794479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.794487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.794668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.794676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.794899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.794907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.795184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.795192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.795511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.795519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.795831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.795839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.796112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.796120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.796452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.796460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.796498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.796507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.796789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.796797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.797013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.797021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.797278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.797286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.797598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.797606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.797785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.797792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.798086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.798094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.798387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.798394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.798711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.798719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.798905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.798913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.799184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.799192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 00:31:36.659 [2024-12-05 21:24:37.799366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.659 [2024-12-05 21:24:37.799374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.659 qpair failed and we were unable to recover it. 
00:31:36.659 [2024-12-05 21:24:37.799688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.799696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.800006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.800014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.800323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.800331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.800642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.800650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.800978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.800986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.801292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.801300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.801628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.801637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.801932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.801940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.802308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.802316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.802627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.802635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.802948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.802956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.803274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.803282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.803439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.803447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.803652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.803659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.803928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.803936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.804266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.804274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.804567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.804576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.804759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.804767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.804941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.804948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.805173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.805181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.805489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.805497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.805813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.805821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.806088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.806097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.806413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.806421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.806740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.806749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.806926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.806934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.807217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.807225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.807566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.807574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.807885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.807893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.808219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.808227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.808476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.808484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.808794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.808802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.809117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.809126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.809439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.809447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.809763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.809770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.660 [2024-12-05 21:24:37.810081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.810089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.810440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.810448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.810761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.810768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.810926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.810934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 00:31:36.660 [2024-12-05 21:24:37.811212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.660 [2024-12-05 21:24:37.811220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.660 qpair failed and we were unable to recover it. 
00:31:36.662 [2024-12-05 21:24:37.840818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.840827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.841157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.841165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.841378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.841387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.841675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.841684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.841988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.841996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 
00:31:36.662 [2024-12-05 21:24:37.842034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.842040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.842352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.842360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.842677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.842685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.842870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.842878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.843064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.843072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 
00:31:36.662 [2024-12-05 21:24:37.843253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.843261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.843523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.843531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.843882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.843890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.844069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.844077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.844368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.844376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 
00:31:36.662 [2024-12-05 21:24:37.844692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.844701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.844875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.844884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.845176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.845184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.845340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.845349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.845632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.845640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 
00:31:36.662 [2024-12-05 21:24:37.845679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.845687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.845993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.846002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.846188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.846197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.846547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.846555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.846892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.846901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 
00:31:36.662 [2024-12-05 21:24:37.847233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.847241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.847417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.662 [2024-12-05 21:24:37.847428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.662 qpair failed and we were unable to recover it. 00:31:36.662 [2024-12-05 21:24:37.847612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.847620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.847700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.847709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.848021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.848030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.848362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.848371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.848664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.848673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.848979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.848987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.849295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.849303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.849610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.849618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.849794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.849801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.850121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.850129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.850454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.850461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.850786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.850794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.850991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.851000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.851302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.851310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.851630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.851637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.851963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.851971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.852300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.852307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.852623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.852631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.852811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.852819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.853220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.853229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.853526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.853535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.853853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.853861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.853902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.853909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.854037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.854044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.854321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.854330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.854659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.854668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.855032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.855041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.855352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.855359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.855716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.855724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.856039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.856047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.856364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.856372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.856560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.856568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.856899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.856907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.857062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.857070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.857375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.857383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.857697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.857705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.858022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.858031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.858363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.858372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.858562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.858570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.858897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.858907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.859216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.859224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.859532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.859540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.859708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.859716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.860023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.860031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.860340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.860349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.860680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.860688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.861003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.861011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 00:31:36.663 [2024-12-05 21:24:37.861315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.861323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it. 
00:31:36.663 [2024-12-05 21:24:37.861512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.663 [2024-12-05 21:24:37.861521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.663 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 on tqpair=0x7f30fc000b90 repeat from 21:24:37.861693 through 21:24:37.891384; duplicate entries omitted]
00:31:36.665 [2024-12-05 21:24:37.891700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.891708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.891908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.891916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.892199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.892206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.892500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.892507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.892827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.892836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 
00:31:36.665 [2024-12-05 21:24:37.893111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.893120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.893452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.893460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.893503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.893510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.893708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.893716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.894030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.894039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 
00:31:36.665 [2024-12-05 21:24:37.894364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.894372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.665 [2024-12-05 21:24:37.894535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.665 [2024-12-05 21:24:37.894543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.665 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.894888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.894896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.895060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.895067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.895220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.895228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.895562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.895569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.895730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.895738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.895963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.895971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.896129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.896136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.896461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.896469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.896634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.896642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.896927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.896936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.897260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.897268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.897544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.897553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.897784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.897793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.898100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.898108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.898297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.898305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.898464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.898472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.898776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.898783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.899098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.899106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.899417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.899425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.899741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.899748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.900064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.900073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.900383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.900391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.900725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.900734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.901006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.901015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.901204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.901213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.901532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.901540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.901712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.901720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.901894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.901901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.902189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.902197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.902377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.902384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.902558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.902566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.902882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.902890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.903068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.903077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.903438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.903446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.903753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.903760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.903921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.903929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f30fc000b90 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.904122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6f030 is same with the state(6) to be set 00:31:36.666 [2024-12-05 21:24:37.904616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.904656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.904997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.905017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.905423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.905462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.905680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.905693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.906037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.906050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.906240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.906252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.906548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.906559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.906789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.906800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.907130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.907142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.907485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.907496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.907826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.907837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.908156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.908168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.908492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.908503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.908807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.908818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.909124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.909136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.909329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.909341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.909522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.909532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.909855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.909877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.910161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.910172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.910355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.910366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.910701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.910712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.911038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.911049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 
00:31:36.666 [2024-12-05 21:24:37.911360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.911372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.666 qpair failed and we were unable to recover it. 00:31:36.666 [2024-12-05 21:24:37.911736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.666 [2024-12-05 21:24:37.911747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.667 qpair failed and we were unable to recover it. 00:31:36.667 [2024-12-05 21:24:37.912077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.667 [2024-12-05 21:24:37.912088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.667 qpair failed and we were unable to recover it. 00:31:36.667 [2024-12-05 21:24:37.912279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.667 [2024-12-05 21:24:37.912290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.667 qpair failed and we were unable to recover it. 00:31:36.667 [2024-12-05 21:24:37.912611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.667 [2024-12-05 21:24:37.912622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.667 qpair failed and we were unable to recover it. 
00:31:36.667 [2024-12-05 21:24:37.912802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.667 [2024-12-05 21:24:37.912813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.667 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0xf72490 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously through timestamp 2024-12-05 21:24:37.945213, differing only in timestamps ...]
00:31:36.669 [2024-12-05 21:24:37.945344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.945354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.945639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.945650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.945820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.945831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.946138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.946149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.946465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.946475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.946785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.946796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.947117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.947128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.947449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.947460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.947616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.947626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.947966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.947978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.948283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.948294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.948493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.948504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.948822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.948833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.949018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.949030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.949219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.949230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.949533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.949544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.949820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.949831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.950168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.950179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.950525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.950536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.950758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.950768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.950956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.950968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.951365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.951376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.951547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.951559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.951746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.951757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.952064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.952076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.952390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.952401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.952708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.952719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.953001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.953013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.953336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.953347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.953655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.953665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.953954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.953965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.954153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.954165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.954447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.954458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.954830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.954841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.955151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.955162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.955325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.955336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.955516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.955530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.955867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.955878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.956155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.956165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.956505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.956515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.956691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.956703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.957019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.957031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.957367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.957377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.957689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.957700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.958015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.958026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.958209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.958219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.958542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.958552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.958868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.958879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.959214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.959224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.959523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.959534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.959712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.959724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.959909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.959921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.959967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.959978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.960248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.960259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.960411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.960421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 
00:31:36.669 [2024-12-05 21:24:37.960590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.960601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.960899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.669 [2024-12-05 21:24:37.960910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.669 qpair failed and we were unable to recover it. 00:31:36.669 [2024-12-05 21:24:37.961092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.961103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.961424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.961434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.961725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.961736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 
00:31:36.670 [2024-12-05 21:24:37.962078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.962089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.962129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.962138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.962449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.962460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.962797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.962810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.962989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.963001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 
00:31:36.670 [2024-12-05 21:24:37.963295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.963305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.963646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.963657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.963848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.963858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.964141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.964152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.964343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.964354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 
00:31:36.670 [2024-12-05 21:24:37.964665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.964675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.964991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.965002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.965321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.965332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.965644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.965656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.965837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.965848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 
00:31:36.670 [2024-12-05 21:24:37.966157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.966169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.966385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.966395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.966725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.966737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.967073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.967085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 00:31:36.670 [2024-12-05 21:24:37.967249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.670 [2024-12-05 21:24:37.967260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.670 qpair failed and we were unable to recover it. 
00:31:36.672 [2024-12-05 21:24:37.999434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.672 [2024-12-05 21:24:37.999445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.672 qpair failed and we were unable to recover it. 00:31:36.672 [2024-12-05 21:24:37.999755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.672 [2024-12-05 21:24:37.999766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.672 qpair failed and we were unable to recover it. 00:31:36.672 [2024-12-05 21:24:37.999961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.672 [2024-12-05 21:24:37.999972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.672 qpair failed and we were unable to recover it. 00:31:36.672 [2024-12-05 21:24:38.000283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.672 [2024-12-05 21:24:38.000294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.672 qpair failed and we were unable to recover it. 00:31:36.672 [2024-12-05 21:24:38.000643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.672 [2024-12-05 21:24:38.000654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.672 qpair failed and we were unable to recover it. 
00:31:36.672 [2024-12-05 21:24:38.000974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.000986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.001280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.001290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.001443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.001457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.001802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.001813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.002197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.002209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.002515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.002526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.002826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.002837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.003181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.003193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.003515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.003526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.003798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.003810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.003970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.003981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.004277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.004288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.004603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.004614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.005028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.005040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.005081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.005090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.005385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.005397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.005702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.005714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.006018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.006030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.006362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.006374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.006701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.006712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.007022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.007033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.007338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.007349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.007529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.007540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.007866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.007878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.008044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.008056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.008220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.008231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.008533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.008544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.008858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.008874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.009070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.009082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.009256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.009267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.009586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.009598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.009914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.009926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.010106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.010117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.010440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.010450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.010637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.010648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.010820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.010832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.011130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.011141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.011455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.011466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.011759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.011769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.672 [2024-12-05 21:24:38.012121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.672 [2024-12-05 21:24:38.012132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.672 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.012466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.012477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.012781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.012792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.013113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.013125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.013432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.013445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.013630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.013643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.013977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.013992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.014311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.014323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.014500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.014511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.014700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.014711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.014898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.014909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.015099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.015110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.015281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.015293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.015604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.015616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.015949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.015961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.016133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.016145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.016466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.016477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.016821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.016832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.017148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.017159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.017465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.017476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.017640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.017651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.017942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.017954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.018274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.018285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.018566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.018577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.018617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.018626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.018898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.018909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.019255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.019266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.019484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.019496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.019691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.019702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.020017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.020028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.020240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.020252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.020647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.020658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.020836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.020847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.021101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.021112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.021420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.021432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.021794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.021805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.022110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.022122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.022482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.022493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.022802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.022813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.022991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.023003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.023273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.023283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.023598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.023609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.023795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.023806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.024111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.024123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.024285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.024296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.024499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.024510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.024808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.024819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.025133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.025147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.025326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.025337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.025663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.025674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.025859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.673 [2024-12-05 21:24:38.025880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.673 qpair failed and we were unable to recover it.
00:31:36.673 [2024-12-05 21:24:38.026217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.026228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.026587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.026598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.026907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.026918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.027221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.027232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.027550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.027561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 
00:31:36.673 [2024-12-05 21:24:38.027816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.027827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.028137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.028148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.028467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.028479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.028792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.673 [2024-12-05 21:24:38.028802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.673 qpair failed and we were unable to recover it. 00:31:36.673 [2024-12-05 21:24:38.029181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.029192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.029497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.029508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.029871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.029882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.030209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.030220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.030554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.030565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.030738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.030750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.030933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.030944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.031121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.031132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.031410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.031420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.031757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.031768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.032035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.032047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.032425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.032436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.032740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.032751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.033034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.033045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.033347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.033362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.033647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.033660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.033971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.033983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.034174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.034185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.034518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.034529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.034844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.034855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.035196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.035208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.035384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.035398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.035448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.035460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.035814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.035824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.036149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.036161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.036497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.036509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.036819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.036830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.037146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.037157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.037354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.037366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.037660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.037670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.037982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.038156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.038168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.038437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.038449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.038575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.038587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.038952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.038964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.039261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.039272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.039435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.039446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.039802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.039813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.040121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.040132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.040437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.040448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.040795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.040806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.040980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.040991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.041263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.041277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.041617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.041629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.041932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.041944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.042347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.042358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.042693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.042704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.042920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.042933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.043261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.043273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.043562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.043574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.043749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.043762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 
00:31:36.674 [2024-12-05 21:24:38.044056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.044068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.044389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.044402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.044749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.044761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.045089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.045101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.674 qpair failed and we were unable to recover it. 00:31:36.674 [2024-12-05 21:24:38.045283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.674 [2024-12-05 21:24:38.045297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 
00:31:36.675 [2024-12-05 21:24:38.045613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.045624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.045926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.045937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.046240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.046252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.046559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.046570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.046879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.046890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 
00:31:36.675 [2024-12-05 21:24:38.047208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.047220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.047468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.047479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.047529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.047539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.047819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.047832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.048168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.048180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 
00:31:36.675 [2024-12-05 21:24:38.048502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.048514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.048812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.048823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.049158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.049169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.049504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.049515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 00:31:36.675 [2024-12-05 21:24:38.049826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.675 [2024-12-05 21:24:38.049838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.675 qpair failed and we were unable to recover it. 
00:31:36.958 [2024-12-05 21:24:38.081730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.081742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.081931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.081942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.082131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.082141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.082448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.082459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.082645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.082656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 
00:31:36.958 [2024-12-05 21:24:38.082967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.082978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.083163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.083175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.083531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.083542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.083846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.083858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.084190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.084202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 
00:31:36.958 [2024-12-05 21:24:38.084499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.084512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.084813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.084824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.085134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.085145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.085453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.085464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.085809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.085820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 
00:31:36.958 [2024-12-05 21:24:38.085988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.086000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.086421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.086433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.086775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.086786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.086982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.086995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.087302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.087314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 
00:31:36.958 [2024-12-05 21:24:38.087495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.087506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.087708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.958 [2024-12-05 21:24:38.087719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.958 qpair failed and we were unable to recover it. 00:31:36.958 [2024-12-05 21:24:38.088036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.088047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.088383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.088394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.088578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.088590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.088915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.088927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.089253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.089264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.089580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.089591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.089894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.089906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.090222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.090233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.090544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.090555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.090875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.090886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.091204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.091216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.091545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.091557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.091716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.091728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.092100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.092112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.092412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.092423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.092693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.092705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.093073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.093085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.093399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.093410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.093710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.093721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.094038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.094051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.094378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.094389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.094696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.094708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.094899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.094910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.095244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.095255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.095469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.095480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.095676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.095686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.095736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.095747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.095972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.095983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.096331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.096341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.096684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.096696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.096994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.097006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.097342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.097353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.097530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.097542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.959 [2024-12-05 21:24:38.097878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.097890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.097981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.097992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.098296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.098307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.098617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.098628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 00:31:36.959 [2024-12-05 21:24:38.098833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.959 [2024-12-05 21:24:38.098844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.959 qpair failed and we were unable to recover it. 
00:31:36.960 [2024-12-05 21:24:38.099155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.099167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.099344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.099356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.099637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.099649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.099782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.099793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.099986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.099998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 
00:31:36.960 [2024-12-05 21:24:38.100342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.100353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.100640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.100652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.100958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.100970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.101293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.101304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.101641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.101652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 
00:31:36.960 [2024-12-05 21:24:38.101837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.101850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.102173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.102185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.102521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.102533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.102843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.102855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 00:31:36.960 [2024-12-05 21:24:38.103185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.960 [2024-12-05 21:24:38.103198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.960 qpair failed and we were unable to recover it. 
00:31:36.960 [2024-12-05 21:24:38.103535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.960 [2024-12-05 21:24:38.103548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.960 qpair failed and we were unable to recover it.
[log truncated: the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0xf72490 at 10.0.0.2:4420 repeats continuously through 21:24:38.135402]
00:31:36.963 [2024-12-05 21:24:38.135606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.135617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.135828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.135840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.136138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.136149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.136498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.136510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.136839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.136851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 
00:31:36.963 [2024-12-05 21:24:38.137070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.137081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.137308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.137318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.137517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.137529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.137816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.137828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.137990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.138001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 
00:31:36.963 [2024-12-05 21:24:38.138244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.138255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.138563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.138575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.138896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.138906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.139202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.139213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.139539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.139550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 
00:31:36.963 [2024-12-05 21:24:38.139878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.139890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.963 qpair failed and we were unable to recover it. 00:31:36.963 [2024-12-05 21:24:38.140191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.963 [2024-12-05 21:24:38.140202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.140370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.140382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.140664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.140676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.141042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.141053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.141214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.141225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.141531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.141542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.141815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.141826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.142153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.142164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.142354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.142367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.142664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.142674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.143008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.143020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.143373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.143385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.143696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.143706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.143993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.144004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.144378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.144389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.144776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.145100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.145112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.145294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.145305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.145594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.145605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.145794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.145806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.146154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.146165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.146494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.146505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.146859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.146874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.147186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.147197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.147322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.147332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.147645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.147656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.147813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.147825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.148155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.148166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.148483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.148495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.148839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.148851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.149067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.149078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.149381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.149393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.149479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.149490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.149767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.149780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.149949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.149960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.150296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.150307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.150492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.150505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.150673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.150685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 00:31:36.964 [2024-12-05 21:24:38.150869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.964 [2024-12-05 21:24:38.150881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.964 qpair failed and we were unable to recover it. 
00:31:36.964 [2024-12-05 21:24:38.151205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.151216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.151525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.151537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.151850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.151866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.152097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.152108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.152373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.152384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 
00:31:36.965 [2024-12-05 21:24:38.152575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.152588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.152739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.152751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.152971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.152983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.153210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.153222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.153405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.153417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 
00:31:36.965 [2024-12-05 21:24:38.153681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.153695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.154015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.154027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.154307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.154318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.154368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.154377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.154707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.154718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 
00:31:36.965 [2024-12-05 21:24:38.155033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.155044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.155241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.155252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.155565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.155575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.155879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.155890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 00:31:36.965 [2024-12-05 21:24:38.156246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.965 [2024-12-05 21:24:38.156258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.965 qpair failed and we were unable to recover it. 
00:31:36.965 [2024-12-05 21:24:38.156565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.156577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.156877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.156888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.157059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.157072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.157229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.157240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.157529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.157846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.157857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.158205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.158217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.158531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.158543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.158725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.158737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.158891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.158903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.159092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.159103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.159268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.159281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.159585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.159595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.159921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.159933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.160113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.160125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.160292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.160303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.160622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.160634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.160883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.160896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.161200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.965 [2024-12-05 21:24:38.161211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.965 qpair failed and we were unable to recover it.
00:31:36.965 [2024-12-05 21:24:38.161528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.161538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.161852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.161870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.162061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.162073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.162401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.162413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.162601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.162612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.162922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.162934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.163125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.163136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.163303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.163314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.163611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.163621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.163803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.163814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.163988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.163999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.164297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.164307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.164356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.164367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.164642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.164654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.164843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.164854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.165178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.165189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.165368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.165379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.165559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.165570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.165838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.165849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.166036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.166048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.166318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.166328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.166676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.166688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.166728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.166737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.167041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.167053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.167419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.167430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.167814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.167825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.168128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.168139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.168467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.168477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.168810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.168821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.169137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.169149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.169373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.169701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.169712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.170007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.170019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.170201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.966 [2024-12-05 21:24:38.170211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.966 qpair failed and we were unable to recover it.
00:31:36.966 [2024-12-05 21:24:38.170534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.170545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.170727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.170739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.170902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.170914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.171095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.171106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.171429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.171440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.171754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.171767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.172100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.172111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.172319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.172330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.172643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.172653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.172934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.172945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.173125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.173137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.173442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.173452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.173637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.173649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.173975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.173987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.174293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.174304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.174612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.174624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:36.967 [2024-12-05 21:24:38.174909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.174921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:31:36.967 [2024-12-05 21:24:38.175229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.175241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:36.967 [2024-12-05 21:24:38.175562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.175573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:36.967 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.967 [2024-12-05 21:24:38.175937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.175948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.176134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.176145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.176510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.176524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.176694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.176704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.177005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.177016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.177385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.177396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.177736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.177745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.177903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.177914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.178291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.178301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.178486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.178652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.178661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.178941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.178955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.179262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.179272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.179572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.179582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.179895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.179906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.179981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.179991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.180301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.180312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.967 qpair failed and we were unable to recover it.
00:31:36.967 [2024-12-05 21:24:38.180602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.967 [2024-12-05 21:24:38.180612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.181018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.181030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.181438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.181448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.181735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.181746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.182048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.182060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.182366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.182376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.182577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.182587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.182629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.182640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.182993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.183004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.183344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.183354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.183661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.183671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.183983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.183995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.184172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.184183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.184499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.184510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.184854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.184867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.185082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.185092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.185392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.185403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.185582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.185593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.185886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.185896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.186208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.186218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.186558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.186568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.186618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.186627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.186818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.186829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.187137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.187148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.187429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.968 [2024-12-05 21:24:38.187439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.968 qpair failed and we were unable to recover it.
00:31:36.968 [2024-12-05 21:24:38.187732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.187742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.188048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.188058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.188349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.188359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.188574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.188585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.188808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.188818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 
00:31:36.968 [2024-12-05 21:24:38.189179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.189191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.189369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.189380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.189566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.189576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.189838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.189847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.190082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.190092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 
00:31:36.968 [2024-12-05 21:24:38.190466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.190478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.190649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.190660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.190871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.190882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.191238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.968 [2024-12-05 21:24:38.191248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.968 qpair failed and we were unable to recover it. 00:31:36.968 [2024-12-05 21:24:38.191539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.191549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.191746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.191756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.192086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.192097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.192256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.192266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.192450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.192461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.192764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.192774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.192944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.192954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.193276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.193286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.193620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.193630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.193951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.193962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.194140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.194150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.194480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.194491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.194799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.194810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.195038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.195049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.195381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.195390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.195554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.195564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.195776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.195787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.196003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.196013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.196216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.196226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.196551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.196561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.196726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.196737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.197038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.197048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.197384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.197393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.197439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.197451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.197806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.197816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.198188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.198198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.198490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.198500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.198811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.198822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.199145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.199156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.199473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.199483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.199773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.200086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.200097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.200394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.200404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.200561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.200572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.200890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.200901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.201233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.201242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 
00:31:36.969 [2024-12-05 21:24:38.201529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.201539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.201830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.201841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.202183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.969 [2024-12-05 21:24:38.202194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.969 qpair failed and we were unable to recover it. 00:31:36.969 [2024-12-05 21:24:38.202513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.202522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.202810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.202820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.203144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.203155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.203472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.203482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.203831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.203841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.204012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.204023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.204361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.204371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.204697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.204706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.205066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.205077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.205388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.205398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.205718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.205728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.206070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.206080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.206427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.206437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.206728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.206739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.207059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.207071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.207401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.207412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.207694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.207704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.207877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.207887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.208175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.208185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.208297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.208306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.208618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.208628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.208962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.208973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.209322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.209333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.209710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.209720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.209885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.209897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.210250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.210262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.210451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.210461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.210645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.210656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.210992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.211003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.211181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.211191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.211582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.211593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.211711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.211722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 
00:31:36.970 [2024-12-05 21:24:38.212001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.212012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.212212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.970 [2024-12-05 21:24:38.212223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.970 qpair failed and we were unable to recover it. 00:31:36.970 [2024-12-05 21:24:38.212407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.212417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.212804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.212814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.213106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.213117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.213298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.213309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.213616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.213626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.213952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.213963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.214278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.214288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.214476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.214485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.214531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.214539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.214849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.214860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.215187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.215198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.215483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.215493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.971 [2024-12-05 21:24:38.215828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.215839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.216153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.216165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:36.971 [2024-12-05 21:24:38.216357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.216368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.216538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.216549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.971 [2024-12-05 21:24:38.216770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.216781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.971 [2024-12-05 21:24:38.216960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.216972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.217312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.217322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.217639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.217649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.218026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.218036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.218279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.218288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.218635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.218645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.218936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.218947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.219150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.219160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.219495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.219504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.219706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.219716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.220081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.220093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.220405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.220415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.220759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.220770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.220966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.220976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.221291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.221300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.221593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.221603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.221795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.221804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.222160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.222171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.222358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.222369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 00:31:36.971 [2024-12-05 21:24:38.222749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.971 [2024-12-05 21:24:38.222758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.971 qpair failed and we were unable to recover it. 
00:31:36.971 [2024-12-05 21:24:38.222955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.222965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.223403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.223412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.223628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.223638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.223973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.223983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.224302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.224312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.224630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.224640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.224831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.224841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.225153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.225164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.225483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.225493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.225537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.225546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.225817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.225826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.226015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.226025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.226295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.226305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.226626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.226636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.226945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.226955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.227216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.227225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.227396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.227406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.227706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.227716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.228063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.228074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.228385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.228394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.228704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.228716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.229068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.229079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.229377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.229386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.229685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.229695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.229860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.229874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.230062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.230072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.230377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.230387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.230576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.230586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.230814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.230823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.231157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.231167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.231500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.231510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.231700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.231710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.232014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.232024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.232404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.232414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.232634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.232644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 
00:31:36.972 [2024-12-05 21:24:38.232936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.232946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.233307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.233317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.233621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.233630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.972 [2024-12-05 21:24:38.233793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.972 [2024-12-05 21:24:38.233802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.972 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.233843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.233852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 
00:31:36.973 [2024-12-05 21:24:38.234153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.234163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.234526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.234537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.234858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.234872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.235233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.235243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.235600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.235609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 
00:31:36.973 [2024-12-05 21:24:38.235929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.235940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.236309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.236318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.236514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.236524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.236726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.236736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.237034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.237045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 
00:31:36.973 [2024-12-05 21:24:38.237399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.237409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.237578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.237587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.237871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.237882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.238186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.238195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 00:31:36.973 [2024-12-05 21:24:38.238494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.973 [2024-12-05 21:24:38.238503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.973 qpair failed and we were unable to recover it. 
00:31:36.973 [2024-12-05 21:24:38.238819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.238828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.238917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.238927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.239088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.239098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.239425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.239435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.239642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.239651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.240104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.240115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.240425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.240435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.240622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.240631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.240939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.240949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.241103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.241114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.241159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.241168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.241388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.241397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.241655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.241666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.241984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.241994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.242326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.242336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.242499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.242509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.242794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.242804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.243115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.243125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.243468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.243477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.243764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.243774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.244143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.244154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.244315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.244325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.973 qpair failed and we were unable to recover it.
00:31:36.973 [2024-12-05 21:24:38.244636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.973 [2024-12-05 21:24:38.244646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.244828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.244838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.245157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.245168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.245347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.245356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.245520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.245530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.245698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.245709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.246169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.246179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.246521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.246531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.246885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.246895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.247220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.247230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.247407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.247417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.247641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.247654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.247830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.247839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.248147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.248158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.248206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.248215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.248401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.248411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.248718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.248728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.249070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.249081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.249135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.249145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.249317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.249327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.249649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.249660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.249857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.249873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.250214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.250224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.250561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.250570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.250891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.250901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.251264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.251275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.251591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.251601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.251883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.251893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.252187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.252197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.252514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.252524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.252686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.252695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 Malloc0
00:31:36.974 [2024-12-05 21:24:38.253054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.253065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.253391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.253401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.253690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.253700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.974 [2024-12-05 21:24:38.253869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.253879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.254154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.254164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 [2024-12-05 21:24:38.254342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.254352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.974 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.974 [2024-12-05 21:24:38.254670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.974 [2024-12-05 21:24:38.254681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.974 qpair failed and we were unable to recover it.
00:31:36.975 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.975 [2024-12-05 21:24:38.254989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.255000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.255337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.255346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.255669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.255679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.256014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.256026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.256325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.256335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.256572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.256582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.256761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.256771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.256994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.257005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.257062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.257071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.257385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.257395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.257565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.257575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.257856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.257869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.258160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.258172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.258339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.258349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.258695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.258705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.259025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.259035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.259268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.259278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.259599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.259610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.259800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.259810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.260130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.260141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.260482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.260491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.260492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:36.975 [2024-12-05 21:24:38.260829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.260838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.261155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.261165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.261332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.261342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.261656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.261666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.261988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.261999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.262301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.262310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.262709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.262718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.263017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.263027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.263348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.263357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.263653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.263663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.264023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.264034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.264341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.975 [2024-12-05 21:24:38.264352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.975 qpair failed and we were unable to recover it.
00:31:36.975 [2024-12-05 21:24:38.264727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.264736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 [2024-12-05 21:24:38.264953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.264963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.265152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.265162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.265504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.265514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.265818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.265828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.266137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.266148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 
00:31:36.976 [2024-12-05 21:24:38.266429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.266444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.266769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.266779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.267031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.267041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.267363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.267373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.267669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.267680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 
00:31:36.976 [2024-12-05 21:24:38.267855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.267869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.268175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.268185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.268506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.268515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.268660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.268681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.268991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.269002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 
00:31:36.976 [2024-12-05 21:24:38.269181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.269190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.976 [2024-12-05 21:24:38.269591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.269601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 [2024-12-05 21:24:38.269782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.269792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:36.976 [2024-12-05 21:24:38.270023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.270033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.976 [2024-12-05 21:24:38.270374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.270384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.976 [2024-12-05 21:24:38.270579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.270590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 [2024-12-05 21:24:38.270783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.270793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 [2024-12-05 21:24:38.270965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.270975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 [2024-12-05 21:24:38.271257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.976 [2024-12-05 21:24:38.271267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.976 qpair failed and we were unable to recover it.
00:31:36.976 [2024-12-05 21:24:38.271610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.271619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.271944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.271954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.272144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.272154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.272480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.272490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.272779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.272789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 
00:31:36.976 [2024-12-05 21:24:38.273041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.273051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.273364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.273374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.273676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.273686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.273978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.273988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.274327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.274337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 
00:31:36.976 [2024-12-05 21:24:38.274624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.274634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.274852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.976 [2024-12-05 21:24:38.274866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.976 qpair failed and we were unable to recover it. 00:31:36.976 [2024-12-05 21:24:38.275182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.275192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.275383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.275392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.275758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.275768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 [2024-12-05 21:24:38.276065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.276075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.276364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.276374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.276554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.276564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.276746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.276755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.277060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.277071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 [2024-12-05 21:24:38.277382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.277392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.277564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.277574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.277853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.277866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.278229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.278239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.278532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.278542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 [2024-12-05 21:24:38.278705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.278716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.279048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.279058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.279241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.279251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.279422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.279432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.279718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.279728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 [2024-12-05 21:24:38.280036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.280046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.280375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.280385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.280692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.280702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.281083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.281093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.281479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.281490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.977 [2024-12-05 21:24:38.281788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.977 [2024-12-05 21:24:38.281799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.977 qpair failed and we were unable to recover it.
00:31:36.977 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:36.977 [2024-12-05 21:24:38.282123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.977 [2024-12-05 21:24:38.282134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.977 qpair failed and we were unable to recover it.
00:31:36.977 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.977 [2024-12-05 21:24:38.282299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.977 [2024-12-05 21:24:38.282309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.977 qpair failed and we were unable to recover it.
00:31:36.977 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.977 [2024-12-05 21:24:38.282492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.977 [2024-12-05 21:24:38.282502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.977 qpair failed and we were unable to recover it.
00:31:36.977 [2024-12-05 21:24:38.282700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.282711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.282984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.282994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.283344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.283355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.283696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.283705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.283876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.283887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 [2024-12-05 21:24:38.284091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.284101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.284416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.284428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.284781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.284791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.285160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.285170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 00:31:36.977 [2024-12-05 21:24:38.285516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.977 [2024-12-05 21:24:38.285525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.977 qpair failed and we were unable to recover it. 
00:31:36.977 [2024-12-05 21:24:38.285814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.285824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.286152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.286162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.286209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.286219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.286520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.286530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.286811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.286820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 
00:31:36.978 [2024-12-05 21:24:38.287010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.287021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.287305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.287315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.287662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.287673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.288013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.288024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 00:31:36.978 [2024-12-05 21:24:38.288321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.978 [2024-12-05 21:24:38.288331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420 00:31:36.978 qpair failed and we were unable to recover it. 
00:31:36.978 [2024-12-05 21:24:38.288633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.288643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.288981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.288992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.289037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.289046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.289369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.289379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.289601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.289611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.289933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.289944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.290121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.290131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.290448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.290458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.290796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.290806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.291164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.291175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.291510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.291520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.291706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.291717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.291918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.291928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.292243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.292253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.292591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.292600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.292896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.292906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.293215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.293225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.293532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.293542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.978 [2024-12-05 21:24:38.293850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.293860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:36.978 [2024-12-05 21:24:38.294164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.294174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.294341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.978 [2024-12-05 21:24:38.294352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.294522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.294531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.978 [2024-12-05 21:24:38.294925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.294936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.295300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.295310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.295529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.295539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.295901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.295915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.296092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.978 [2024-12-05 21:24:38.296102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.978 qpair failed and we were unable to recover it.
00:31:36.978 [2024-12-05 21:24:38.296420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.296430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.296703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.296713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.297001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.297011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.297328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.297338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.297675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.297685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.297971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.297981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.298175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.298185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.298494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.298504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.298701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.298711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.299116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.299126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.299325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.299336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.299654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.299667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.299841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.299852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.300029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.300038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.300414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.300424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.300714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.979 [2024-12-05 21:24:38.300723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf72490 with addr=10.0.0.2, port=4420
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.300754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:36.979 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.979 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:36.979 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.979 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.979 [2024-12-05 21:24:38.311465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.311547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.979 [2024-12-05 21:24:38.311565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.979 [2024-12-05 21:24:38.311573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.979 [2024-12-05 21:24:38.311580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.979 [2024-12-05 21:24:38.311599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.979 21:24:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2303715
00:31:36.979 [2024-12-05 21:24:38.321378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.321448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.979 [2024-12-05 21:24:38.321463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.979 [2024-12-05 21:24:38.321471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.979 [2024-12-05 21:24:38.321477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.979 [2024-12-05 21:24:38.321492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.331368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.331432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.979 [2024-12-05 21:24:38.331447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.979 [2024-12-05 21:24:38.331454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.979 [2024-12-05 21:24:38.331461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.979 [2024-12-05 21:24:38.331475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.341395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.341455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.979 [2024-12-05 21:24:38.341470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.979 [2024-12-05 21:24:38.341477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.979 [2024-12-05 21:24:38.341483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.979 [2024-12-05 21:24:38.341497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.351256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.351312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.979 [2024-12-05 21:24:38.351326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.979 [2024-12-05 21:24:38.351334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.979 [2024-12-05 21:24:38.351340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.979 [2024-12-05 21:24:38.351354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.361227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.361287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.979 [2024-12-05 21:24:38.361300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.979 [2024-12-05 21:24:38.361307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.979 [2024-12-05 21:24:38.361314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.979 [2024-12-05 21:24:38.361328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.979 qpair failed and we were unable to recover it.
00:31:36.979 [2024-12-05 21:24:38.371367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:36.979 [2024-12-05 21:24:38.371430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:36.980 [2024-12-05 21:24:38.371448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:36.980 [2024-12-05 21:24:38.371455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:36.980 [2024-12-05 21:24:38.371462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:36.980 [2024-12-05 21:24:38.371476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:36.980 qpair failed and we were unable to recover it.
00:31:37.243 [2024-12-05 21:24:38.381395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.243 [2024-12-05 21:24:38.381487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.243 [2024-12-05 21:24:38.381502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.243 [2024-12-05 21:24:38.381510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.243 [2024-12-05 21:24:38.381516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.243 [2024-12-05 21:24:38.381530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.243 qpair failed and we were unable to recover it.
00:31:37.243 [2024-12-05 21:24:38.391463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.243 [2024-12-05 21:24:38.391520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.243 [2024-12-05 21:24:38.391534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.243 [2024-12-05 21:24:38.391541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.243 [2024-12-05 21:24:38.391548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.243 [2024-12-05 21:24:38.391561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.243 qpair failed and we were unable to recover it.
00:31:37.243 [2024-12-05 21:24:38.401495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.243 [2024-12-05 21:24:38.401551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.243 [2024-12-05 21:24:38.401577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.243 [2024-12-05 21:24:38.401585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.243 [2024-12-05 21:24:38.401593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.243 [2024-12-05 21:24:38.401613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.243 qpair failed and we were unable to recover it.
00:31:37.243 [2024-12-05 21:24:38.411510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.243 [2024-12-05 21:24:38.411564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.243 [2024-12-05 21:24:38.411579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.243 [2024-12-05 21:24:38.411587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.243 [2024-12-05 21:24:38.411594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.243 [2024-12-05 21:24:38.411613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.243 qpair failed and we were unable to recover it.
00:31:37.243 [2024-12-05 21:24:38.421520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.243 [2024-12-05 21:24:38.421579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.243 [2024-12-05 21:24:38.421593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.243 [2024-12-05 21:24:38.421600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.243 [2024-12-05 21:24:38.421607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.421621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.431557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.431649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.431664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.431672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.431679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.431693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.441581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.441637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.441651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.441658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.441665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.441679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.451592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.451678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.451693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.451701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.451707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.451721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.461636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.461696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.461711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.461718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.461725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.461739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.471646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.471705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.471719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.471726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.471732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.471746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.481676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.481724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.481739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.481746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.481752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.481766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.491618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.491685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.491700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.491707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.491713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.491727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.501782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.501840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.501857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.501871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.501878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.501892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.511675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.244 [2024-12-05 21:24:38.511732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.244 [2024-12-05 21:24:38.511746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.244 [2024-12-05 21:24:38.511753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.244 [2024-12-05 21:24:38.511760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.244 [2024-12-05 21:24:38.511774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.244 qpair failed and we were unable to recover it.
00:31:37.244 [2024-12-05 21:24:38.521852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.244 [2024-12-05 21:24:38.521931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.244 [2024-12-05 21:24:38.521945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.244 [2024-12-05 21:24:38.521953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.244 [2024-12-05 21:24:38.521959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.244 [2024-12-05 21:24:38.521974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.244 qpair failed and we were unable to recover it. 
00:31:37.244 [2024-12-05 21:24:38.531940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.244 [2024-12-05 21:24:38.532003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.244 [2024-12-05 21:24:38.532019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.244 [2024-12-05 21:24:38.532026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.244 [2024-12-05 21:24:38.532033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.244 [2024-12-05 21:24:38.532051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.244 qpair failed and we were unable to recover it. 
00:31:37.244 [2024-12-05 21:24:38.541930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.244 [2024-12-05 21:24:38.541985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.244 [2024-12-05 21:24:38.542000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.244 [2024-12-05 21:24:38.542008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.244 [2024-12-05 21:24:38.542015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.244 [2024-12-05 21:24:38.542033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.244 qpair failed and we were unable to recover it. 
00:31:37.244 [2024-12-05 21:24:38.551943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.244 [2024-12-05 21:24:38.551999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.244 [2024-12-05 21:24:38.552015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.552022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.552028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.552044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.561996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.562061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.562076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.562084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.562090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.562104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.571949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.572034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.572048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.572056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.572062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.572076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.581995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.582056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.582070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.582078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.582084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.582098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.591932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.591990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.592004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.592012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.592018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.592032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.602045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.602101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.602116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.602123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.602130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.602144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.612100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.612176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.612190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.612197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.612205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.612219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.622102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.622162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.622175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.622183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.622190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.622204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.632100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.632202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.632220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.632227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.632234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.632248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.642150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.642201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.642215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.642222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.642229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.642243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.652041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.652096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.652110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.652117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.652124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.652138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.662197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.662250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.662264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.662271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.662278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.662291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.245 [2024-12-05 21:24:38.672111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.245 [2024-12-05 21:24:38.672178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.245 [2024-12-05 21:24:38.672192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.245 [2024-12-05 21:24:38.672200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.245 [2024-12-05 21:24:38.672206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.245 [2024-12-05 21:24:38.672224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.245 qpair failed and we were unable to recover it. 
00:31:37.508 [2024-12-05 21:24:38.682227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.682284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.682297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.682304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.682311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.682325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.692255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.692312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.692328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.692335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.692343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.692357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.702325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.702387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.702400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.702408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.702414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.702428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.712349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.712404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.712417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.712425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.712431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.712445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.722349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.722400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.722413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.722421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.722427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.722442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.732390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.732449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.732463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.732471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.732477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.732491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.742420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.742474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.742487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.742494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.742501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.742515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.752463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.752518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.752531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.752538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.752545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.752558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.762453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.762507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.762524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.762531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.762538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.762552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.772562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.772626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.772640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.772648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.772655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.772669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.782539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.509 [2024-12-05 21:24:38.782605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.509 [2024-12-05 21:24:38.782630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.509 [2024-12-05 21:24:38.782639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.509 [2024-12-05 21:24:38.782646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.509 [2024-12-05 21:24:38.782667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.509 qpair failed and we were unable to recover it. 
00:31:37.509 [2024-12-05 21:24:38.792556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.509 [2024-12-05 21:24:38.792627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.509 [2024-12-05 21:24:38.792652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.509 [2024-12-05 21:24:38.792661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.509 [2024-12-05 21:24:38.792669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.509 [2024-12-05 21:24:38.792689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.509 qpair failed and we were unable to recover it.
00:31:37.509 [2024-12-05 21:24:38.802583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.509 [2024-12-05 21:24:38.802648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.509 [2024-12-05 21:24:38.802673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.509 [2024-12-05 21:24:38.802681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.509 [2024-12-05 21:24:38.802690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.509 [2024-12-05 21:24:38.802714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.509 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.812603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.812663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.812679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.812686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.812693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.812709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.822646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.822718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.822732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.822740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.822747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.822761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.832667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.832722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.832736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.832744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.832751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.832765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.842690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.842748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.842762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.842770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.842776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.842790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.852697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.852757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.852771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.852779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.852785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.852799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.862797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.862856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.862873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.862881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.862888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.862902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.872788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.872842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.872856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.872867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.872874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.872888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.882796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.882855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.882873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.882880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.882887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.882901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.892835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.892938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.892956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.892963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.892970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.892984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.902859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.902928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.902944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.902951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.902958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.902977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.912911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.912995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.913009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.913017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.913024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.913039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.922918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.922975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.922989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.922996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.510 [2024-12-05 21:24:38.923003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.510 [2024-12-05 21:24:38.923017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.510 qpair failed and we were unable to recover it.
00:31:37.510 [2024-12-05 21:24:38.932957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.510 [2024-12-05 21:24:38.933013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.510 [2024-12-05 21:24:38.933027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.510 [2024-12-05 21:24:38.933034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.511 [2024-12-05 21:24:38.933041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.511 [2024-12-05 21:24:38.933058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.511 qpair failed and we were unable to recover it.
00:31:37.774 [2024-12-05 21:24:38.942949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.774 [2024-12-05 21:24:38.943005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.774 [2024-12-05 21:24:38.943019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.774 [2024-12-05 21:24:38.943026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:38.943033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:38.943047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:38.952983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:38.953055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:38.953069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:38.953076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:38.953083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:38.953097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:38.963025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:38.963105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:38.963118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:38.963126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:38.963133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:38.963147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:38.973100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:38.973163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:38.973176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:38.973184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:38.973190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:38.973204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:38.983093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:38.983149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:38.983163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:38.983170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:38.983177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:38.983191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:38.993114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:38.993182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:38.993196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:38.993203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:38.993210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:38.993224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.003176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.003235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.003249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.003256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.003263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.003276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.013159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.013217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.013231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.013238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.013245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.013258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.023201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.023279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.023296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.023303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.023311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.023325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.033196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.033247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.033261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.033269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.033276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.033289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.043246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.043299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.043313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.043320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.043327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.043340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.053269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.053325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.053339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.053346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.053352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.053365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.063299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.063379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.063393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.775 [2024-12-05 21:24:39.063400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.775 [2024-12-05 21:24:39.063407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.775 [2024-12-05 21:24:39.063424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.775 qpair failed and we were unable to recover it.
00:31:37.775 [2024-12-05 21:24:39.073229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.775 [2024-12-05 21:24:39.073355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.775 [2024-12-05 21:24:39.073369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.073377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.073384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.073397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.083233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.776 [2024-12-05 21:24:39.083290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.776 [2024-12-05 21:24:39.083304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.083312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.083318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.083332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.093362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.776 [2024-12-05 21:24:39.093419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.776 [2024-12-05 21:24:39.093432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.093440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.093446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.093460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.103383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.776 [2024-12-05 21:24:39.103449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.776 [2024-12-05 21:24:39.103463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.103470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.103476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.103490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.113454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.776 [2024-12-05 21:24:39.113515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.776 [2024-12-05 21:24:39.113529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.113536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.113542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.113556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.123516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.776 [2024-12-05 21:24:39.123575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.776 [2024-12-05 21:24:39.123588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.123596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.123603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.123616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.133484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:37.776 [2024-12-05 21:24:39.133535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:37.776 [2024-12-05 21:24:39.133549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:37.776 [2024-12-05 21:24:39.133557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:37.776 [2024-12-05 21:24:39.133563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:37.776 [2024-12-05 21:24:39.133577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:37.776 qpair failed and we were unable to recover it.
00:31:37.776 [2024-12-05 21:24:39.143525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.143582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.143596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.143603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.143610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.776 [2024-12-05 21:24:39.143623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.776 qpair failed and we were unable to recover it. 
00:31:37.776 [2024-12-05 21:24:39.153559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.153622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.153651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.153660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.153669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.776 [2024-12-05 21:24:39.153688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.776 qpair failed and we were unable to recover it. 
00:31:37.776 [2024-12-05 21:24:39.163573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.163629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.163655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.163664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.163671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.776 [2024-12-05 21:24:39.163692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.776 qpair failed and we were unable to recover it. 
00:31:37.776 [2024-12-05 21:24:39.173584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.173647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.173672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.173680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.173688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.776 [2024-12-05 21:24:39.173708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.776 qpair failed and we were unable to recover it. 
00:31:37.776 [2024-12-05 21:24:39.183612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.183681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.183696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.183703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.183710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.776 [2024-12-05 21:24:39.183725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.776 qpair failed and we were unable to recover it. 
00:31:37.776 [2024-12-05 21:24:39.193678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.193733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.193748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.193755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.193761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.776 [2024-12-05 21:24:39.193780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.776 qpair failed and we were unable to recover it. 
00:31:37.776 [2024-12-05 21:24:39.203688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.776 [2024-12-05 21:24:39.203744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.776 [2024-12-05 21:24:39.203759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.776 [2024-12-05 21:24:39.203766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.776 [2024-12-05 21:24:39.203773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:37.777 [2024-12-05 21:24:39.203787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.777 qpair failed and we were unable to recover it. 
00:31:38.040 [2024-12-05 21:24:39.213702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.213794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.213809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.213816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.213824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.213838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.223725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.223786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.223800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.223807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.223814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.223828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.233672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.233766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.233780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.233789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.233796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.233810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.243772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.243827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.243841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.243848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.243855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.243874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.253859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.253920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.253933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.253940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.253946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.253961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.263850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.263915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.263929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.263937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.263943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.263957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.273901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.273960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.273974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.273982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.273990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.274004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.283776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.283843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.040 [2024-12-05 21:24:39.283857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.040 [2024-12-05 21:24:39.283872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.040 [2024-12-05 21:24:39.283878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.040 [2024-12-05 21:24:39.283892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.040 qpair failed and we were unable to recover it.
00:31:38.040 [2024-12-05 21:24:39.293826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.040 [2024-12-05 21:24:39.293883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.293897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.293904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.293911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.293925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.303964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.304018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.304032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.304039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.304046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.304060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.314021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.314078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.314092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.314099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.314106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.314120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.323893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.323982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.323996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.324004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.324010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.324028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.334045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.334096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.334110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.334118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.334124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.334138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.344052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.344110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.344124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.344131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.344138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.344152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.354114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.354171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.354185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.354192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.354199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.354213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.364154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.364208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.364221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.364229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.364235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.364249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.374170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.374224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.374238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.374246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.374252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.374266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.384194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.384253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.384266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.384273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.384280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.384294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.394235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.394294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.394307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.394315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.394321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.394335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.404241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.404296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.404309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.404317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.404324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.404337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.414239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.414297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.414310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.041 [2024-12-05 21:24:39.414321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.041 [2024-12-05 21:24:39.414328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.041 [2024-12-05 21:24:39.414342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.041 qpair failed and we were unable to recover it.
00:31:38.041 [2024-12-05 21:24:39.424315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.041 [2024-12-05 21:24:39.424391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.041 [2024-12-05 21:24:39.424405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.042 [2024-12-05 21:24:39.424413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.042 [2024-12-05 21:24:39.424419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.042 [2024-12-05 21:24:39.424434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.042 qpair failed and we were unable to recover it.
00:31:38.042 [2024-12-05 21:24:39.434326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.042 [2024-12-05 21:24:39.434384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.042 [2024-12-05 21:24:39.434398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.042 [2024-12-05 21:24:39.434406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.042 [2024-12-05 21:24:39.434412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.042 [2024-12-05 21:24:39.434426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.042 qpair failed and we were unable to recover it.
00:31:38.042 [2024-12-05 21:24:39.444381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.042 [2024-12-05 21:24:39.444436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.042 [2024-12-05 21:24:39.444449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.042 [2024-12-05 21:24:39.444457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.042 [2024-12-05 21:24:39.444464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.042 [2024-12-05 21:24:39.444477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.042 qpair failed and we were unable to recover it.
00:31:38.042 [2024-12-05 21:24:39.454341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.042 [2024-12-05 21:24:39.454395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.042 [2024-12-05 21:24:39.454409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.042 [2024-12-05 21:24:39.454416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.042 [2024-12-05 21:24:39.454422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.042 [2024-12-05 21:24:39.454439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.042 qpair failed and we were unable to recover it. 
00:31:38.042 [2024-12-05 21:24:39.464295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.042 [2024-12-05 21:24:39.464357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.042 [2024-12-05 21:24:39.464371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.042 [2024-12-05 21:24:39.464378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.042 [2024-12-05 21:24:39.464385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.042 [2024-12-05 21:24:39.464398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.042 qpair failed and we were unable to recover it. 
00:31:38.305 [2024-12-05 21:24:39.474311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.305 [2024-12-05 21:24:39.474367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.305 [2024-12-05 21:24:39.474382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.305 [2024-12-05 21:24:39.474390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.305 [2024-12-05 21:24:39.474397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.305 [2024-12-05 21:24:39.474411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.305 qpair failed and we were unable to recover it. 
00:31:38.305 [2024-12-05 21:24:39.484459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.305 [2024-12-05 21:24:39.484510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.305 [2024-12-05 21:24:39.484524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.305 [2024-12-05 21:24:39.484531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.305 [2024-12-05 21:24:39.484537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.305 [2024-12-05 21:24:39.484551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.305 qpair failed and we were unable to recover it. 
00:31:38.305 [2024-12-05 21:24:39.494514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.305 [2024-12-05 21:24:39.494584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.305 [2024-12-05 21:24:39.494598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.305 [2024-12-05 21:24:39.494606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.305 [2024-12-05 21:24:39.494613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.305 [2024-12-05 21:24:39.494627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.305 qpair failed and we were unable to recover it. 
00:31:38.305 [2024-12-05 21:24:39.504517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.305 [2024-12-05 21:24:39.504569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.305 [2024-12-05 21:24:39.504583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.305 [2024-12-05 21:24:39.504591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.305 [2024-12-05 21:24:39.504598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.305 [2024-12-05 21:24:39.504612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.305 qpair failed and we were unable to recover it. 
00:31:38.305 [2024-12-05 21:24:39.514550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.305 [2024-12-05 21:24:39.514607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.305 [2024-12-05 21:24:39.514621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.305 [2024-12-05 21:24:39.514628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.305 [2024-12-05 21:24:39.514635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.305 [2024-12-05 21:24:39.514649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.305 qpair failed and we were unable to recover it. 
00:31:38.305 [2024-12-05 21:24:39.524619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.305 [2024-12-05 21:24:39.524675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.305 [2024-12-05 21:24:39.524690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.305 [2024-12-05 21:24:39.524698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.524706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.524720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.534591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.534647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.534672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.534682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.534689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.534709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.544638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.544694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.544709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.544722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.544729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.544744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.554661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.554719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.554735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.554743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.554750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.554769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.564669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.564727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.564742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.564750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.564757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.564771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.574722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.574814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.574828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.574836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.574842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.574857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.584742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.584800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.584813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.584821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.584827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.584845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.594772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.594837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.594851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.594858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.594868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.594882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.604805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.604903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.604917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.604925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.604932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.604946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.614700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.614768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.614782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.614789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.614796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.614809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.624740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.624801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.624815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.624822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.624829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.624843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.634894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.634950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.634965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.634972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.634979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.634993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.644904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.644961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.644975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.644983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.644990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.306 [2024-12-05 21:24:39.645003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.306 qpair failed and we were unable to recover it. 
00:31:38.306 [2024-12-05 21:24:39.654923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.306 [2024-12-05 21:24:39.654977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.306 [2024-12-05 21:24:39.654991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.306 [2024-12-05 21:24:39.654998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.306 [2024-12-05 21:24:39.655005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.655019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.664843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.307 [2024-12-05 21:24:39.664949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.307 [2024-12-05 21:24:39.664963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.307 [2024-12-05 21:24:39.664971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.307 [2024-12-05 21:24:39.664977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.664991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.675003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.307 [2024-12-05 21:24:39.675059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.307 [2024-12-05 21:24:39.675073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.307 [2024-12-05 21:24:39.675084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.307 [2024-12-05 21:24:39.675091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.675105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.685021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.307 [2024-12-05 21:24:39.685071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.307 [2024-12-05 21:24:39.685085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.307 [2024-12-05 21:24:39.685092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.307 [2024-12-05 21:24:39.685099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.685113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.695040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.307 [2024-12-05 21:24:39.695127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.307 [2024-12-05 21:24:39.695141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.307 [2024-12-05 21:24:39.695149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.307 [2024-12-05 21:24:39.695155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.695169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.705111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.307 [2024-12-05 21:24:39.705175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.307 [2024-12-05 21:24:39.705188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.307 [2024-12-05 21:24:39.705196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.307 [2024-12-05 21:24:39.705202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.705216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.715096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.307 [2024-12-05 21:24:39.715155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.307 [2024-12-05 21:24:39.715171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.307 [2024-12-05 21:24:39.715178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.307 [2024-12-05 21:24:39.715185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.307 [2024-12-05 21:24:39.715203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.307 qpair failed and we were unable to recover it. 
00:31:38.307 [2024-12-05 21:24:39.725102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.307 [2024-12-05 21:24:39.725156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.307 [2024-12-05 21:24:39.725170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.307 [2024-12-05 21:24:39.725177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.307 [2024-12-05 21:24:39.725184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.307 [2024-12-05 21:24:39.725198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.307 qpair failed and we were unable to recover it.
00:31:38.307 [2024-12-05 21:24:39.735148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.307 [2024-12-05 21:24:39.735204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.307 [2024-12-05 21:24:39.735219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.307 [2024-12-05 21:24:39.735226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.307 [2024-12-05 21:24:39.735234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.307 [2024-12-05 21:24:39.735248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.307 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.745186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.745243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.745257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.745264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.745271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.745285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.755246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.755311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.755325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.755332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.755339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.755352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.765258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.765316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.765330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.765337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.765344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.765357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.775273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.775325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.775339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.775346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.775353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.775367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.785292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.785350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.785363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.785370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.785377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.785390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.795209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.795286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.795299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.795307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.795313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.795328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.805363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.805418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.805432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.805442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.805449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.805463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.815353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.815409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.571 [2024-12-05 21:24:39.815423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.571 [2024-12-05 21:24:39.815430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.571 [2024-12-05 21:24:39.815437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.571 [2024-12-05 21:24:39.815451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.571 qpair failed and we were unable to recover it.
00:31:38.571 [2024-12-05 21:24:39.825430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.571 [2024-12-05 21:24:39.825485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.825499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.825506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.825513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.825526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.835458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.835515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.835530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.835538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.835546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.835560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.845469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.845525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.845539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.845547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.845553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.845571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.855505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.855565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.855580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.855588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.855597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.855615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.865544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.865615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.865630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.865637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.865644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.865659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.875589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.875657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.875671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.875679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.875686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.875700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.885512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.885570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.885584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.885592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.885598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.885612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.895615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.895671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.895686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.895693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.895700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.895714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.905648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.905707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.905721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.905728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.905735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.905752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.915669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.915723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.915737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.915745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.915751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.915765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.925689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.925744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.925757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.925765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.925771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.925786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.935716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.935776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.935790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.935801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.572 [2024-12-05 21:24:39.935807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.572 [2024-12-05 21:24:39.935821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.572 qpair failed and we were unable to recover it.
00:31:38.572 [2024-12-05 21:24:39.945728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.572 [2024-12-05 21:24:39.945784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.572 [2024-12-05 21:24:39.945798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.572 [2024-12-05 21:24:39.945805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.573 [2024-12-05 21:24:39.945812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.573 [2024-12-05 21:24:39.945826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.573 qpair failed and we were unable to recover it.
00:31:38.573 [2024-12-05 21:24:39.955783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.573 [2024-12-05 21:24:39.955839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.573 [2024-12-05 21:24:39.955853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.573 [2024-12-05 21:24:39.955865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.573 [2024-12-05 21:24:39.955872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.573 [2024-12-05 21:24:39.955886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.573 qpair failed and we were unable to recover it.
00:31:38.573 [2024-12-05 21:24:39.965822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.573 [2024-12-05 21:24:39.965922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.573 [2024-12-05 21:24:39.965936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.573 [2024-12-05 21:24:39.965944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.573 [2024-12-05 21:24:39.965951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.573 [2024-12-05 21:24:39.965965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.573 qpair failed and we were unable to recover it.
00:31:38.573 [2024-12-05 21:24:39.975839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.573 [2024-12-05 21:24:39.975900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.573 [2024-12-05 21:24:39.975914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.573 [2024-12-05 21:24:39.975922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.573 [2024-12-05 21:24:39.975929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.573 [2024-12-05 21:24:39.975946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.573 qpair failed and we were unable to recover it.
00:31:38.573 [2024-12-05 21:24:39.985854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.573 [2024-12-05 21:24:39.985917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.573 [2024-12-05 21:24:39.985930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.573 [2024-12-05 21:24:39.985938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.573 [2024-12-05 21:24:39.985945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.573 [2024-12-05 21:24:39.985959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.573 qpair failed and we were unable to recover it.
00:31:38.573 [2024-12-05 21:24:39.995876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.573 [2024-12-05 21:24:39.995937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.573 [2024-12-05 21:24:39.995951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.573 [2024-12-05 21:24:39.995958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.573 [2024-12-05 21:24:39.995965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.573 [2024-12-05 21:24:39.995978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.573 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.005941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.006024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.006039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.006047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.006054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.006068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.016021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.016098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.016112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.016119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.016127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.016141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.025970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.026032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.026047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.026055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.026063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.026077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.036018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.036076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.036090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.036098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.036105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.036119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.046089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.046144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.046158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.046166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.046172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.046187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.056098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.056152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.056166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.056174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.056181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.056195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.066107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:38.837 [2024-12-05 21:24:40.066166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:38.837 [2024-12-05 21:24:40.066180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:38.837 [2024-12-05 21:24:40.066195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:38.837 [2024-12-05 21:24:40.066202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:38.837 [2024-12-05 21:24:40.066216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:38.837 qpair failed and we were unable to recover it.
00:31:38.837 [2024-12-05 21:24:40.076162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.837 [2024-12-05 21:24:40.076224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.837 [2024-12-05 21:24:40.076240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.837 [2024-12-05 21:24:40.076247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.837 [2024-12-05 21:24:40.076254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.837 [2024-12-05 21:24:40.076272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.837 qpair failed and we were unable to recover it. 
00:31:38.837 [2024-12-05 21:24:40.086096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.837 [2024-12-05 21:24:40.086150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.837 [2024-12-05 21:24:40.086165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.837 [2024-12-05 21:24:40.086172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.837 [2024-12-05 21:24:40.086179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.837 [2024-12-05 21:24:40.086193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.837 qpair failed and we were unable to recover it. 
00:31:38.837 [2024-12-05 21:24:40.096200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.837 [2024-12-05 21:24:40.096251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.837 [2024-12-05 21:24:40.096265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.837 [2024-12-05 21:24:40.096272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.837 [2024-12-05 21:24:40.096279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.837 [2024-12-05 21:24:40.096292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.837 qpair failed and we were unable to recover it. 
00:31:38.837 [2024-12-05 21:24:40.106244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.837 [2024-12-05 21:24:40.106298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.837 [2024-12-05 21:24:40.106312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.837 [2024-12-05 21:24:40.106319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.837 [2024-12-05 21:24:40.106326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.106344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.116128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.116184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.116198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.116206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.116212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.116226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.126268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.126325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.126338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.126346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.126352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.126366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.136304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.136360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.136374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.136382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.136388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.136402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.146324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.146384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.146398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.146405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.146412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.146426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.156360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.156417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.156431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.156438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.156445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.156459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.166371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.166432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.166446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.166453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.166460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.166474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.176407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.176461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.176475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.176482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.176489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.176503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.186363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.186436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.186452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.186459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.186466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.186480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.196445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.196522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.196536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.196546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.196554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.196567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.206473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.206525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.206539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.206546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.206553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.206567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.216472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.216533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.216547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.216555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.216562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.216576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.226552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.226611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.226624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.226631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.226638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.838 [2024-12-05 21:24:40.226651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.838 qpair failed and we were unable to recover it. 
00:31:38.838 [2024-12-05 21:24:40.236571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.838 [2024-12-05 21:24:40.236625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.838 [2024-12-05 21:24:40.236638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.838 [2024-12-05 21:24:40.236646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.838 [2024-12-05 21:24:40.236652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.839 [2024-12-05 21:24:40.236670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.839 qpair failed and we were unable to recover it. 
00:31:38.839 [2024-12-05 21:24:40.246596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.839 [2024-12-05 21:24:40.246652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.839 [2024-12-05 21:24:40.246678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.839 [2024-12-05 21:24:40.246687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.839 [2024-12-05 21:24:40.246694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.839 [2024-12-05 21:24:40.246716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.839 qpair failed and we were unable to recover it. 
00:31:38.839 [2024-12-05 21:24:40.256619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.839 [2024-12-05 21:24:40.256680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.839 [2024-12-05 21:24:40.256706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.839 [2024-12-05 21:24:40.256715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.839 [2024-12-05 21:24:40.256723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.839 [2024-12-05 21:24:40.256744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.839 qpair failed and we were unable to recover it. 
00:31:38.839 [2024-12-05 21:24:40.266645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.839 [2024-12-05 21:24:40.266702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.839 [2024-12-05 21:24:40.266718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.839 [2024-12-05 21:24:40.266726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.839 [2024-12-05 21:24:40.266733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:38.839 [2024-12-05 21:24:40.266749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.839 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.276691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.276785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.276801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.276810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.276820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.276837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.286725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.286828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.286843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.286851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.286858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.286877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.296720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.296780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.296794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.296802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.296809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.296823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.306650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.306715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.306728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.306736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.306742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.306756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.316802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.316857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.316875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.316882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.316889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.316903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.326856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.326931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.326945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.326956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.326963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.326977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.336859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.336950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.336964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.336972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.336979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.336994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [2024-12-05 21:24:40.346922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.103 [2024-12-05 21:24:40.346980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.103 [2024-12-05 21:24:40.346994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.103 [2024-12-05 21:24:40.347001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.103 [2024-12-05 21:24:40.347008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.103 [2024-12-05 21:24:40.347022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.103 qpair failed and we were unable to recover it. 
00:31:39.103 [... 34 further identical CONNECT failure sequences elided: the same six errors (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0xf72490; CQ transport error -6 on qpair id 3) repeat at ~10 ms intervals from 21:24:40.356 through 21:24:40.687, each ending with "qpair failed and we were unable to recover it." ...]
00:31:39.369 [2024-12-05 21:24:40.697815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.697875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.697889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.697897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.697904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.697918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.707921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.707987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.708001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.708008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.708015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.708030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.717904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.717965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.717978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.717990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.717997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.718011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.727933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.727989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.728003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.728010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.728017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.728031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.737807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.737856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.737874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.737881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.737888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.737902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.748018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.748074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.748088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.748095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.748101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.748115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.758050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.758107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.758121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.758128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.758135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.758153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.768076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.768132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.768146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.768153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.369 [2024-12-05 21:24:40.768160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.369 [2024-12-05 21:24:40.768174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.369 qpair failed and we were unable to recover it. 
00:31:39.369 [2024-12-05 21:24:40.778057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.369 [2024-12-05 21:24:40.778100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.369 [2024-12-05 21:24:40.778114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.369 [2024-12-05 21:24:40.778122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.370 [2024-12-05 21:24:40.778129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.370 [2024-12-05 21:24:40.778143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.370 qpair failed and we were unable to recover it. 
00:31:39.370 [2024-12-05 21:24:40.788145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.370 [2024-12-05 21:24:40.788229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.370 [2024-12-05 21:24:40.788243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.370 [2024-12-05 21:24:40.788250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.370 [2024-12-05 21:24:40.788256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.370 [2024-12-05 21:24:40.788271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.370 qpair failed and we were unable to recover it. 
00:31:39.370 [2024-12-05 21:24:40.798129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.370 [2024-12-05 21:24:40.798182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.370 [2024-12-05 21:24:40.798196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.370 [2024-12-05 21:24:40.798203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.370 [2024-12-05 21:24:40.798209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.370 [2024-12-05 21:24:40.798223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.370 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.808185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.808245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.808259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.808266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.808273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.808286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.818169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.818224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.818238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.818246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.818252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.818266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.828119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.828172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.828186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.828193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.828200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.828213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.838271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.838344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.838357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.838364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.838371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.838384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.848296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.848349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.848362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.848373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.848380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.848393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.858271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.858335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.858349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.858357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.858364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.858377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.868343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.868401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.868415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.868422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.868429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.868442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.878348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.878404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.878417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.878425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.878432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.878445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.888378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.888428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.888441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.888448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.888455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.888473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.898340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.898388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.898401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.898409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.898415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.898429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.908324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.908389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.633 [2024-12-05 21:24:40.908403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.633 [2024-12-05 21:24:40.908411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.633 [2024-12-05 21:24:40.908417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.633 [2024-12-05 21:24:40.908431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.633 qpair failed and we were unable to recover it. 
00:31:39.633 [2024-12-05 21:24:40.918344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.633 [2024-12-05 21:24:40.918396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.634 [2024-12-05 21:24:40.918410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.634 [2024-12-05 21:24:40.918417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.634 [2024-12-05 21:24:40.918423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.634 [2024-12-05 21:24:40.918437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.634 qpair failed and we were unable to recover it. 
00:31:39.634 [2024-12-05 21:24:40.928511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.634 [2024-12-05 21:24:40.928568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.634 [2024-12-05 21:24:40.928581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.634 [2024-12-05 21:24:40.928588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.634 [2024-12-05 21:24:40.928595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.634 [2024-12-05 21:24:40.928609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.634 qpair failed and we were unable to recover it. 
00:31:39.634 [2024-12-05 21:24:40.938490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.634 [2024-12-05 21:24:40.938540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.634 [2024-12-05 21:24:40.938554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.634 [2024-12-05 21:24:40.938562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.634 [2024-12-05 21:24:40.938568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.634 [2024-12-05 21:24:40.938582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.634 qpair failed and we were unable to recover it. 
00:31:39.634 [2024-12-05 21:24:40.948569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.634 [2024-12-05 21:24:40.948626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.634 [2024-12-05 21:24:40.948639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.634 [2024-12-05 21:24:40.948646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.634 [2024-12-05 21:24:40.948653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.634 [2024-12-05 21:24:40.948667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.634 qpair failed and we were unable to recover it. 
00:31:39.634 [2024-12-05 21:24:40.958524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.634 [2024-12-05 21:24:40.958576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.634 [2024-12-05 21:24:40.958590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.634 [2024-12-05 21:24:40.958597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.634 [2024-12-05 21:24:40.958604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.634 [2024-12-05 21:24:40.958617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.634 qpair failed and we were unable to recover it. 
00:31:39.634 [2024-12-05 21:24:40.968605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:40.968657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:40.968671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:40.968678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:40.968684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:40.968698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:40.978600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:40.978656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:40.978682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:40.978695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:40.978703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:40.978722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:40.988553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:40.988608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:40.988624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:40.988632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:40.988639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:40.988654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:40.998673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:40.998728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:40.998742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:40.998749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:40.998756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:40.998770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:41.008756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:41.008821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:41.008834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:41.008842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:41.008849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:41.008868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:41.018700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:41.018781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:41.018795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:41.018802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:41.018809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:41.018831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:41.028783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:41.028838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:41.028852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:41.028860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:41.028872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:41.028887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:41.038646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:41.038698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.634 [2024-12-05 21:24:41.038712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.634 [2024-12-05 21:24:41.038720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.634 [2024-12-05 21:24:41.038726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.634 [2024-12-05 21:24:41.038741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.634 qpair failed and we were unable to recover it.
00:31:39.634 [2024-12-05 21:24:41.048799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.634 [2024-12-05 21:24:41.048852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.635 [2024-12-05 21:24:41.048869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.635 [2024-12-05 21:24:41.048877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.635 [2024-12-05 21:24:41.048883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.635 [2024-12-05 21:24:41.048897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.635 qpair failed and we were unable to recover it.
00:31:39.635 [2024-12-05 21:24:41.058733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.635 [2024-12-05 21:24:41.058785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.635 [2024-12-05 21:24:41.058799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.635 [2024-12-05 21:24:41.058806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.635 [2024-12-05 21:24:41.058812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.635 [2024-12-05 21:24:41.058826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.635 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.068871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.068940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.068954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.068961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.068968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.068982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.078885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.078939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.078953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.078960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.078966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.078981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.088937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.089036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.089051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.089058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.089065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.089079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.098786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.098838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.098851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.098859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.098870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.098885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.108982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.109042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.109056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.109066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.109073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.109087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.118974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.119026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.119040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.119047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.119053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.119067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.129024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.129078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.129091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.129099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.129105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.129119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.139012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.139062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.139076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.139083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.139090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.139104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.149094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.149148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.149162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.149169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.149176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.149193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.159110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.159162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.159177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.159184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.159191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.159205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.169155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.169204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.897 [2024-12-05 21:24:41.169218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.897 [2024-12-05 21:24:41.169225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.897 [2024-12-05 21:24:41.169232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.897 [2024-12-05 21:24:41.169245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.897 qpair failed and we were unable to recover it.
00:31:39.897 [2024-12-05 21:24:41.179049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.897 [2024-12-05 21:24:41.179110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.179124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.179132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.179139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.179153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.189208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.189265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.189279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.189286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.189293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.189307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.199206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.199265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.199280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.199287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.199294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.199308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.209217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.209273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.209287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.209294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.209301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.209315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.219238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.219282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.219295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.219303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.219309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.219323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.229301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.229354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.229368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.229375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.229382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.229396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.239307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.239359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.239373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.239384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.239390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.239404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.249350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.249401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.249415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.249422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.249429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.249443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.259355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.259403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.259416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.259423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.259430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.259443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.269414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.269471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.269485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.269492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.269499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.269512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.279416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.279465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.279479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.279486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.279492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.279510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.289483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.289537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.289551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.289559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.289565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.289579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.299450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.299502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.299527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.898 [2024-12-05 21:24:41.299536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.898 [2024-12-05 21:24:41.299544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.898 [2024-12-05 21:24:41.299564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.898 qpair failed and we were unable to recover it.
00:31:39.898 [2024-12-05 21:24:41.309546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:39.898 [2024-12-05 21:24:41.309609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:39.898 [2024-12-05 21:24:41.309634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:39.899 [2024-12-05 21:24:41.309643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:39.899 [2024-12-05 21:24:41.309650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:39.899 [2024-12-05 21:24:41.309670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:39.899 qpair failed and we were unable to recover it.
00:31:39.899 [2024-12-05 21:24:41.319518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.899 [2024-12-05 21:24:41.319572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.899 [2024-12-05 21:24:41.319588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.899 [2024-12-05 21:24:41.319595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.899 [2024-12-05 21:24:41.319602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.899 [2024-12-05 21:24:41.319617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.899 qpair failed and we were unable to recover it. 
00:31:39.899 [2024-12-05 21:24:41.329577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.899 [2024-12-05 21:24:41.329630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.899 [2024-12-05 21:24:41.329645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.899 [2024-12-05 21:24:41.329652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.899 [2024-12-05 21:24:41.329659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:39.899 [2024-12-05 21:24:41.329673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.899 qpair failed and we were unable to recover it. 
00:31:40.161 [2024-12-05 21:24:41.339573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.161 [2024-12-05 21:24:41.339623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.161 [2024-12-05 21:24:41.339637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.161 [2024-12-05 21:24:41.339645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.161 [2024-12-05 21:24:41.339652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.161 [2024-12-05 21:24:41.339666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.161 qpair failed and we were unable to recover it. 
00:31:40.161 [2024-12-05 21:24:41.349654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.161 [2024-12-05 21:24:41.349708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.161 [2024-12-05 21:24:41.349721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.161 [2024-12-05 21:24:41.349729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.161 [2024-12-05 21:24:41.349736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.161 [2024-12-05 21:24:41.349750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.161 qpair failed and we were unable to recover it. 
00:31:40.161 [2024-12-05 21:24:41.359708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.161 [2024-12-05 21:24:41.359758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.161 [2024-12-05 21:24:41.359772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.161 [2024-12-05 21:24:41.359779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.161 [2024-12-05 21:24:41.359786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.161 [2024-12-05 21:24:41.359799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.161 qpair failed and we were unable to recover it. 
00:31:40.161 [2024-12-05 21:24:41.369588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.161 [2024-12-05 21:24:41.369637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.161 [2024-12-05 21:24:41.369651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.161 [2024-12-05 21:24:41.369663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.161 [2024-12-05 21:24:41.369669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.369683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.379675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.379723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.379737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.379744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.379751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.379765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.389760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.389816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.389829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.389837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.389843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.389857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.399745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.399798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.399811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.399819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.399825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.399839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.409808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.409866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.409880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.409888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.409894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.409912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.419786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.419834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.419847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.419855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.419865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.419880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.429786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.429880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.429898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.429906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.429915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.429930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.439830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.439885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.439900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.439907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.439913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.439928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.449910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.449972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.449986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.449993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.450000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.450014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.459912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.459962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.459976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.459984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.459990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.460005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.469966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.470024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.470037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.470045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.470051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.470065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.479961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.480010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.480024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.480031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.480038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.480052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.490020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.490077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.490090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.490098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.490104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.490118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.162 [2024-12-05 21:24:41.499998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.162 [2024-12-05 21:24:41.500050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.162 [2024-12-05 21:24:41.500063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.162 [2024-12-05 21:24:41.500074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.162 [2024-12-05 21:24:41.500081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.162 [2024-12-05 21:24:41.500095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.162 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.510122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.510203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.510217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.510225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.510232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.510245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.519971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.520025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.520038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.520046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.520052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.520066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.530158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.530248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.530261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.530269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.530276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.530290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.540144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.540195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.540209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.540216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.540223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.540240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.550189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.550246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.550259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.550267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.550273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.550287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.560201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.560249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.560265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.560272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.560279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.560294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.570253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.570304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.570318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.570325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.570332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.570345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.580215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.163 [2024-12-05 21:24:41.580309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.163 [2024-12-05 21:24:41.580323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.163 [2024-12-05 21:24:41.580330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.163 [2024-12-05 21:24:41.580337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.163 [2024-12-05 21:24:41.580351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.163 qpair failed and we were unable to recover it. 
00:31:40.163 [2024-12-05 21:24:41.590311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.163 [2024-12-05 21:24:41.590385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.163 [2024-12-05 21:24:41.590398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.163 [2024-12-05 21:24:41.590406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.163 [2024-12-05 21:24:41.590413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.163 [2024-12-05 21:24:41.590426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.163 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.600275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.600324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.600338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.600346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.600352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.600366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.610364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.610420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.610434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.610441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.610448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.610462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.620237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.620287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.620300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.620308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.620314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.620328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.630424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.630485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.630498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.630510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.630517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.630531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.640415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.640477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.640491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.640498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.640505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.640518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.650461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.650556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.650570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.650578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.650584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.650598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.660338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.660387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.660400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.660407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.660414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.427 [2024-12-05 21:24:41.660427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.427 qpair failed and we were unable to recover it.
00:31:40.427 [2024-12-05 21:24:41.670528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.427 [2024-12-05 21:24:41.670582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.427 [2024-12-05 21:24:41.670596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.427 [2024-12-05 21:24:41.670603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.427 [2024-12-05 21:24:41.670609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.670626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.680541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.680614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.680628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.680635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.680642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.680657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.690550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.690602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.690616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.690623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.690630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.690644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.700434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.700484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.700499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.700506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.700513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.700527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.710640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.710695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.710708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.710716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.710723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.710737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.720503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.720562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.720576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.720584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.720590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.720604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.730660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.730712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.730726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.730733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.730740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.730754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.740674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.740726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.740740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.740748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.740754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.740768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.750741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.750796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.750809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.750817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.750824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.750837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.760734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.760816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.760830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.760840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.760848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.760866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.770789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.770842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.770855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.770866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.770873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.770889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.780682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.780780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.780797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.780805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.780815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.780832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.790846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.790911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.790926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.790933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.790940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.428 [2024-12-05 21:24:41.790954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.428 qpair failed and we were unable to recover it.
00:31:40.428 [2024-12-05 21:24:41.800859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.428 [2024-12-05 21:24:41.800912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.428 [2024-12-05 21:24:41.800925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.428 [2024-12-05 21:24:41.800933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.428 [2024-12-05 21:24:41.800940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.429 [2024-12-05 21:24:41.800957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.429 qpair failed and we were unable to recover it.
00:31:40.429 [2024-12-05 21:24:41.810888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.429 [2024-12-05 21:24:41.810947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.429 [2024-12-05 21:24:41.810962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.429 [2024-12-05 21:24:41.810969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.429 [2024-12-05 21:24:41.810976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.429 [2024-12-05 21:24:41.810990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.429 qpair failed and we were unable to recover it.
00:31:40.429 [2024-12-05 21:24:41.820892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.429 [2024-12-05 21:24:41.820946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.429 [2024-12-05 21:24:41.820960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.429 [2024-12-05 21:24:41.820967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.429 [2024-12-05 21:24:41.820974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.429 [2024-12-05 21:24:41.820988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.429 qpair failed and we were unable to recover it.
00:31:40.429 [2024-12-05 21:24:41.830971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.429 [2024-12-05 21:24:41.831056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.429 [2024-12-05 21:24:41.831069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.429 [2024-12-05 21:24:41.831077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.429 [2024-12-05 21:24:41.831084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.429 [2024-12-05 21:24:41.831097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.429 qpair failed and we were unable to recover it.
00:31:40.429 [2024-12-05 21:24:41.840836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.429 [2024-12-05 21:24:41.840886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.429 [2024-12-05 21:24:41.840901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.429 [2024-12-05 21:24:41.840908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.429 [2024-12-05 21:24:41.840915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.429 [2024-12-05 21:24:41.840929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.429 qpair failed and we were unable to recover it.
00:31:40.429 [2024-12-05 21:24:41.851020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.429 [2024-12-05 21:24:41.851098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.429 [2024-12-05 21:24:41.851111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.429 [2024-12-05 21:24:41.851118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.429 [2024-12-05 21:24:41.851125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.429 [2024-12-05 21:24:41.851140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.429 qpair failed and we were unable to recover it.
00:31:40.429 [2024-12-05 21:24:41.860992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.429 [2024-12-05 21:24:41.861050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.429 [2024-12-05 21:24:41.861063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.429 [2024-12-05 21:24:41.861070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.429 [2024-12-05 21:24:41.861077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.861091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.871065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.871123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.871137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.871144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.871151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.871164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.881105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.881161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.881174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.881182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.881189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.881203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.891132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.891225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.891238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.891250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.891257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.891271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.901089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.901138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.901151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.901159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.901166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.901179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.911164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.911219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.911232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.911239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.911247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.911260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.921190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.921242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.921255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.921263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.921269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.921283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.931198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:40.693 [2024-12-05 21:24:41.931250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:40.693 [2024-12-05 21:24:41.931264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:40.693 [2024-12-05 21:24:41.931271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:40.693 [2024-12-05 21:24:41.931277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:40.693 [2024-12-05 21:24:41.931294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:40.693 qpair failed and we were unable to recover it.
00:31:40.693 [2024-12-05 21:24:41.941264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.693 [2024-12-05 21:24:41.941350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.693 [2024-12-05 21:24:41.941364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.693 [2024-12-05 21:24:41.941371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.693 [2024-12-05 21:24:41.941378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.693 [2024-12-05 21:24:41.941392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.693 qpair failed and we were unable to recover it. 
00:31:40.693 [2024-12-05 21:24:41.951266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.693 [2024-12-05 21:24:41.951331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.693 [2024-12-05 21:24:41.951344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.693 [2024-12-05 21:24:41.951351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.693 [2024-12-05 21:24:41.951358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.693 [2024-12-05 21:24:41.951372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.693 qpair failed and we were unable to recover it. 
00:31:40.693 [2024-12-05 21:24:41.961179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.693 [2024-12-05 21:24:41.961241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:41.961255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:41.961263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:41.961270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:41.961283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:41.971349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:41.971405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:41.971418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:41.971426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:41.971432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:41.971446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:41.981221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:41.981287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:41.981301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:41.981308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:41.981315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:41.981329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:41.991407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:41.991464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:41.991477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:41.991485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:41.991491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:41.991505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.001376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.001424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.001438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.001445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.001452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.001465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.011326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.011383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.011396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.011404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.011410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.011424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.021313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.021362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.021376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.021391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.021397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.021411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.031508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.031565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.031578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.031586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.031592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.031606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.041493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.041554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.041568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.041576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.041583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.041596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.051509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.051558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.051572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.051579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.051586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.051600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.061551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.061605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.061619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.061626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.061633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.061650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.071655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.071710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.071723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.071731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.071737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.071751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.081585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.081638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.081652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.081659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.694 [2024-12-05 21:24:42.081666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.694 [2024-12-05 21:24:42.081679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.694 qpair failed and we were unable to recover it. 
00:31:40.694 [2024-12-05 21:24:42.091505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.694 [2024-12-05 21:24:42.091551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.694 [2024-12-05 21:24:42.091566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.694 [2024-12-05 21:24:42.091574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.695 [2024-12-05 21:24:42.091580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.695 [2024-12-05 21:24:42.091595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.695 qpair failed and we were unable to recover it. 
00:31:40.695 [2024-12-05 21:24:42.101566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.695 [2024-12-05 21:24:42.101621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.695 [2024-12-05 21:24:42.101635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.695 [2024-12-05 21:24:42.101642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.695 [2024-12-05 21:24:42.101649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.695 [2024-12-05 21:24:42.101663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.695 qpair failed and we were unable to recover it. 
00:31:40.695 [2024-12-05 21:24:42.111730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.695 [2024-12-05 21:24:42.111786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.695 [2024-12-05 21:24:42.111800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.695 [2024-12-05 21:24:42.111807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.695 [2024-12-05 21:24:42.111814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.695 [2024-12-05 21:24:42.111828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.695 qpair failed and we were unable to recover it. 
00:31:40.695 [2024-12-05 21:24:42.121612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.695 [2024-12-05 21:24:42.121663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.695 [2024-12-05 21:24:42.121677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.695 [2024-12-05 21:24:42.121684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.695 [2024-12-05 21:24:42.121691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.695 [2024-12-05 21:24:42.121705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.695 qpair failed and we were unable to recover it. 
00:31:40.957 [2024-12-05 21:24:42.131744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.957 [2024-12-05 21:24:42.131791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.131804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.131812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.131818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.131832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.141760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.141809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.141823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.141832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.141839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.141852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.151824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.151889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.151903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.151914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.151920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.151935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.161816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.161871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.161885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.161892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.161899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.161913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.171731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.171790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.171805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.171812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.171819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.171834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.181879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.181929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.181943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.181951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.181958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.181971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.191955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.192047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.192062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.192069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.192077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.192095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.201937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.201998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.202012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.202019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.202026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.202040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.211827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.211881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.211894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.211902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.211908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.211922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.222006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.222086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.222100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.222107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.222114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.222128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.232066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.232125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.232139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.232146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.232153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.232167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.242053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.242108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.242124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.242132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.242142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.242157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.251943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.958 [2024-12-05 21:24:42.251990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.958 [2024-12-05 21:24:42.252004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.958 [2024-12-05 21:24:42.252011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.958 [2024-12-05 21:24:42.252018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.958 [2024-12-05 21:24:42.252032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.958 qpair failed and we were unable to recover it. 
00:31:40.958 [2024-12-05 21:24:42.262106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.262160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.262174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.262182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.262188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.262202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.272172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.272229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.272242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.272249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.272256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.272270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.282201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.282247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.282261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.282272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.282280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.282295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.292170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.292219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.292232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.292240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.292246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.292260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.302065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.302132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.302145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.302153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.302159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.302173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.312132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.312202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.312216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.312223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.312230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.312243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.322255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.322307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.322320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.322327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.322334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.322350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.332281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.332333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.332346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.332353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.332360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.332373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.342307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.342355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.342369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.342376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.342383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.342396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.352244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.352299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.352312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.352320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.352326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.352340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.362370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.362436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.362449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.362457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.362463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.362476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.372371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.372417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.372431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.372438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.372445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.372458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:40.959 [2024-12-05 21:24:42.382401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.959 [2024-12-05 21:24:42.382450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.959 [2024-12-05 21:24:42.382464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.959 [2024-12-05 21:24:42.382471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.959 [2024-12-05 21:24:42.382478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:40.959 [2024-12-05 21:24:42.382491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.959 qpair failed and we were unable to recover it. 
00:31:41.222 [2024-12-05 21:24:42.392491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.222 [2024-12-05 21:24:42.392547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.222 [2024-12-05 21:24:42.392561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.222 [2024-12-05 21:24:42.392568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.222 [2024-12-05 21:24:42.392575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.222 [2024-12-05 21:24:42.392589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.222 qpair failed and we were unable to recover it. 
00:31:41.222 [2024-12-05 21:24:42.402475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.222 [2024-12-05 21:24:42.402523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.222 [2024-12-05 21:24:42.402537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.222 [2024-12-05 21:24:42.402545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.222 [2024-12-05 21:24:42.402552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.222 [2024-12-05 21:24:42.402566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.222 qpair failed and we were unable to recover it. 
00:31:41.222 [2024-12-05 21:24:42.412478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.222 [2024-12-05 21:24:42.412526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.222 [2024-12-05 21:24:42.412539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.222 [2024-12-05 21:24:42.412551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.222 [2024-12-05 21:24:42.412557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.222 [2024-12-05 21:24:42.412571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.222 qpair failed and we were unable to recover it. 
00:31:41.222 [2024-12-05 21:24:42.422426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.222 [2024-12-05 21:24:42.422475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.222 [2024-12-05 21:24:42.422489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.222 [2024-12-05 21:24:42.422497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.222 [2024-12-05 21:24:42.422503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.222 [2024-12-05 21:24:42.422517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.222 qpair failed and we were unable to recover it. 
00:31:41.222 [2024-12-05 21:24:42.432585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.222 [2024-12-05 21:24:42.432640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.222 [2024-12-05 21:24:42.432654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.222 [2024-12-05 21:24:42.432661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.222 [2024-12-05 21:24:42.432668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.222 [2024-12-05 21:24:42.432682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.222 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.442589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.442654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.442680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.442689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.442697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.442717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.452595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.452642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.452658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.452666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.452673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.452693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.462636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.462683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.462698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.462705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.462712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.462726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.472704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.472758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.472772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.472779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.472786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.472800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.482616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.482662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.482675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.482683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.482689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.482703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.492688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.492730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.492744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.492751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.492758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.492772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.502751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.502799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.502813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.502821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.502827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.502842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.512763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.223 [2024-12-05 21:24:42.512810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.223 [2024-12-05 21:24:42.512824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.223 [2024-12-05 21:24:42.512831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.223 [2024-12-05 21:24:42.512838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.223 [2024-12-05 21:24:42.512851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.223 qpair failed and we were unable to recover it. 
00:31:41.223 [2024-12-05 21:24:42.522799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.223 [2024-12-05 21:24:42.522851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.223 [2024-12-05 21:24:42.522869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.223 [2024-12-05 21:24:42.522877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.223 [2024-12-05 21:24:42.522883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.223 [2024-12-05 21:24:42.522898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.223 qpair failed and we were unable to recover it.
00:31:41.223 [2024-12-05 21:24:42.532814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.223 [2024-12-05 21:24:42.532859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.223 [2024-12-05 21:24:42.532878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.223 [2024-12-05 21:24:42.532886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.223 [2024-12-05 21:24:42.532892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.223 [2024-12-05 21:24:42.532906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.223 qpair failed and we were unable to recover it.
00:31:41.223 [2024-12-05 21:24:42.542820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.223 [2024-12-05 21:24:42.542867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.223 [2024-12-05 21:24:42.542881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.223 [2024-12-05 21:24:42.542892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.223 [2024-12-05 21:24:42.542899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.223 [2024-12-05 21:24:42.542913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.223 qpair failed and we were unable to recover it.
00:31:41.223 [2024-12-05 21:24:42.552859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.223 [2024-12-05 21:24:42.552911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.223 [2024-12-05 21:24:42.552927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.223 [2024-12-05 21:24:42.552935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.223 [2024-12-05 21:24:42.552943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.223 [2024-12-05 21:24:42.552958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.223 qpair failed and we were unable to recover it.
00:31:41.223 [2024-12-05 21:24:42.562900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.223 [2024-12-05 21:24:42.562949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.223 [2024-12-05 21:24:42.562963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.223 [2024-12-05 21:24:42.562971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.223 [2024-12-05 21:24:42.562977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.223 [2024-12-05 21:24:42.562992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.223 qpair failed and we were unable to recover it.
00:31:41.223 [2024-12-05 21:24:42.572777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.572826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.572840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.572848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.572855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.572875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.582936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.582981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.582995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.583003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.583009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.583027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.592965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.593018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.593032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.593039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.593046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.593060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.603005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.603056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.603070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.603077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.603084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.603098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.612888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.612937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.612950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.612957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.612964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.612978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.623052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.623103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.623116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.623124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.623130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.623144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.633068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.633123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.633136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.633143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.633150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.633164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.643127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.643172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.643186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.643193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.643200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.643213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.224 [2024-12-05 21:24:42.653143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.224 [2024-12-05 21:24:42.653190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.224 [2024-12-05 21:24:42.653204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.224 [2024-12-05 21:24:42.653211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.224 [2024-12-05 21:24:42.653217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.224 [2024-12-05 21:24:42.653231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.224 qpair failed and we were unable to recover it.
00:31:41.486 [2024-12-05 21:24:42.663169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.486 [2024-12-05 21:24:42.663216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.486 [2024-12-05 21:24:42.663230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.486 [2024-12-05 21:24:42.663237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.486 [2024-12-05 21:24:42.663243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.486 [2024-12-05 21:24:42.663257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.486 qpair failed and we were unable to recover it.
00:31:41.486 [2024-12-05 21:24:42.673063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.486 [2024-12-05 21:24:42.673109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.486 [2024-12-05 21:24:42.673124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.486 [2024-12-05 21:24:42.673136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.486 [2024-12-05 21:24:42.673142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.486 [2024-12-05 21:24:42.673158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.683225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.683276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.683290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.683298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.683304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.683319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.693207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.693255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.693269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.693277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.693283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.693297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.703261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.703310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.703324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.703331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.703337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.703351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.713270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.713315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.713328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.713336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.713342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.713359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.723330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.723379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.723395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.723402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.723409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.723423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.733300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.733344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.733359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.733366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.733373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.733387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.743403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.743469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.743483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.743491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.743497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.743511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.753409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.753453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.753467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.753475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.753481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.753495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.763444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.763506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.763519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.763527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.763533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.763547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.773453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.773529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.773543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.773550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.773557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.773571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.783458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.783504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.783518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.783526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.783533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.783548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.793519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.793571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.793596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.793605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.793612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.487 [2024-12-05 21:24:42.793631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.487 qpair failed and we were unable to recover it.
00:31:41.487 [2024-12-05 21:24:42.803553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.487 [2024-12-05 21:24:42.803604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.487 [2024-12-05 21:24:42.803629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.487 [2024-12-05 21:24:42.803642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.487 [2024-12-05 21:24:42.803650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.803670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.813536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.488 [2024-12-05 21:24:42.813589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.488 [2024-12-05 21:24:42.813614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.488 [2024-12-05 21:24:42.813623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.488 [2024-12-05 21:24:42.813630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.813650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.823566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.488 [2024-12-05 21:24:42.823613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.488 [2024-12-05 21:24:42.823629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.488 [2024-12-05 21:24:42.823636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.488 [2024-12-05 21:24:42.823643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.823658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.833607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.488 [2024-12-05 21:24:42.833655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.488 [2024-12-05 21:24:42.833669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.488 [2024-12-05 21:24:42.833676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.488 [2024-12-05 21:24:42.833683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.833697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.843642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.488 [2024-12-05 21:24:42.843694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.488 [2024-12-05 21:24:42.843709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.488 [2024-12-05 21:24:42.843716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.488 [2024-12-05 21:24:42.843723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.843742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.853576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.488 [2024-12-05 21:24:42.853624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.488 [2024-12-05 21:24:42.853638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.488 [2024-12-05 21:24:42.853645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.488 [2024-12-05 21:24:42.853651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.853665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.863600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:41.488 [2024-12-05 21:24:42.863651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:41.488 [2024-12-05 21:24:42.863666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:41.488 [2024-12-05 21:24:42.863673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:41.488 [2024-12-05 21:24:42.863679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490
00:31:41.488 [2024-12-05 21:24:42.863693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:41.488 qpair failed and we were unable to recover it.
00:31:41.488 [2024-12-05 21:24:42.873731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.488 [2024-12-05 21:24:42.873777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.488 [2024-12-05 21:24:42.873791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.488 [2024-12-05 21:24:42.873798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.488 [2024-12-05 21:24:42.873805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.488 [2024-12-05 21:24:42.873819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.488 qpair failed and we were unable to recover it. 
00:31:41.488 [2024-12-05 21:24:42.883747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.488 [2024-12-05 21:24:42.883796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.488 [2024-12-05 21:24:42.883810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.488 [2024-12-05 21:24:42.883817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.488 [2024-12-05 21:24:42.883823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.488 [2024-12-05 21:24:42.883837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.488 qpair failed and we were unable to recover it. 
00:31:41.488 [2024-12-05 21:24:42.893650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.488 [2024-12-05 21:24:42.893752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.488 [2024-12-05 21:24:42.893769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.488 [2024-12-05 21:24:42.893776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.488 [2024-12-05 21:24:42.893783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.488 [2024-12-05 21:24:42.893798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.488 qpair failed and we were unable to recover it. 
00:31:41.488 [2024-12-05 21:24:42.903796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.488 [2024-12-05 21:24:42.903890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.488 [2024-12-05 21:24:42.903905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.488 [2024-12-05 21:24:42.903912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.488 [2024-12-05 21:24:42.903919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.488 [2024-12-05 21:24:42.903933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.488 qpair failed and we were unable to recover it. 
00:31:41.488 [2024-12-05 21:24:42.913820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.488 [2024-12-05 21:24:42.913868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.488 [2024-12-05 21:24:42.913882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.488 [2024-12-05 21:24:42.913889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.488 [2024-12-05 21:24:42.913896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.488 [2024-12-05 21:24:42.913910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.488 qpair failed and we were unable to recover it. 
00:31:41.770 [2024-12-05 21:24:42.923748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.770 [2024-12-05 21:24:42.923796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.770 [2024-12-05 21:24:42.923810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.770 [2024-12-05 21:24:42.923818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.770 [2024-12-05 21:24:42.923825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.770 [2024-12-05 21:24:42.923839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.770 qpair failed and we were unable to recover it. 
00:31:41.770 [2024-12-05 21:24:42.933869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.770 [2024-12-05 21:24:42.933918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.770 [2024-12-05 21:24:42.933932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.770 [2024-12-05 21:24:42.933944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.770 [2024-12-05 21:24:42.933951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.770 [2024-12-05 21:24:42.933965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.770 qpair failed and we were unable to recover it. 
00:31:41.770 [2024-12-05 21:24:42.943916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.770 [2024-12-05 21:24:42.943962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.770 [2024-12-05 21:24:42.943976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.770 [2024-12-05 21:24:42.943983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.770 [2024-12-05 21:24:42.943990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.770 [2024-12-05 21:24:42.944004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.770 qpair failed and we were unable to recover it. 
00:31:41.770 [2024-12-05 21:24:42.953929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.770 [2024-12-05 21:24:42.953980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.770 [2024-12-05 21:24:42.953993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.770 [2024-12-05 21:24:42.954001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.770 [2024-12-05 21:24:42.954008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.770 [2024-12-05 21:24:42.954022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.770 qpair failed and we were unable to recover it. 
00:31:41.770 [2024-12-05 21:24:42.963935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:42.963982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:42.963996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:42.964004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:42.964011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:42.964025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:42.974003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:42.974090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:42.974104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:42.974111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:42.974117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:42.974131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:42.983985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:42.984034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:42.984048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:42.984055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:42.984061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:42.984075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:42.993923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:42.993975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:42.993989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:42.993996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:42.994003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:42.994016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.004088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.004132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.004146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.004154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.004160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.004174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.014089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.014138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.014152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.014159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.014166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.014179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.024021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.024078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.024092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.024099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.024106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.024120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.034145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.034224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.034239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.034247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.034254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.034268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.044182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.044235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.044248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.044255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.044262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.044276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.054193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.054268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.054281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.054289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.054295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.054309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.064239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.064289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.064303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.064314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.064320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.064334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.074253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.074351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.074364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.074371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.074378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.074392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.084291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.084339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.771 [2024-12-05 21:24:43.084352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.771 [2024-12-05 21:24:43.084360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.771 [2024-12-05 21:24:43.084367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.771 [2024-12-05 21:24:43.084380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.771 qpair failed and we were unable to recover it. 
00:31:41.771 [2024-12-05 21:24:43.094302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.771 [2024-12-05 21:24:43.094344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.094358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.094365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.094372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.094385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.104323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.104380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.104393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.104401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.104407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.104420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.114360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.114410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.114423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.114431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.114437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.114450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.124402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.124458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.124471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.124478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.124485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.124498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.134451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.134502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.134516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.134523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.134530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.134544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.144436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.144482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.144496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.144504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.144510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.144524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.154498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.154552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.154566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.154573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.154579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.154593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.164540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.164594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.164608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.164615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.164621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.164635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.174483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.174564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.174589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.174599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.174606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.174625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:41.772 [2024-12-05 21:24:43.184424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.772 [2024-12-05 21:24:43.184479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.772 [2024-12-05 21:24:43.184494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.772 [2024-12-05 21:24:43.184502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.772 [2024-12-05 21:24:43.184508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:41.772 [2024-12-05 21:24:43.184523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.772 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.194579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.194628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.194642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.194655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.194661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.194676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.204609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.204661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.204686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.204695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.204703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.204722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.214695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.214740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.214755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.214763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.214769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.214785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.224655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.224732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.224746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.224753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.224760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.224775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.234686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.234731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.234746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.234753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.234760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.234774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.244640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.244690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.244704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.244712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.244719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.244733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.254604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.254649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.254663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.254671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.254677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.254691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.071 [2024-12-05 21:24:43.264740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.071 [2024-12-05 21:24:43.264787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.071 [2024-12-05 21:24:43.264801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.071 [2024-12-05 21:24:43.264808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.071 [2024-12-05 21:24:43.264815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.071 [2024-12-05 21:24:43.264829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.071 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.274773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.274829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.274843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.274850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.274857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.274875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.284876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.284955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.284970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.284978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.284986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.285001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.294710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.294761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.294774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.294782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.294788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.294802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.304871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.304958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.304972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.304979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.304986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.305000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.314931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.314980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.314993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.315001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.315007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.315021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.324981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.325056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.325069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.325080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.325087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.325102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.334950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.334993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.335007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.335014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.335020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.335035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.344944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.344994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.345008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.345015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.345021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.345036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.354983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.355030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.355044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.355052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.355058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.355073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.365046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.365096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.365110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.365118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.365125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.365139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.375106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.375154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.375168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.375175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.375182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.375196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.385080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.385129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.385143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.385150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.385157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.385171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.072 [2024-12-05 21:24:43.395110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.072 [2024-12-05 21:24:43.395157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.072 [2024-12-05 21:24:43.395171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.072 [2024-12-05 21:24:43.395179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.072 [2024-12-05 21:24:43.395185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.072 [2024-12-05 21:24:43.395199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.072 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.405143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.405204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.405217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.405224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.405231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.405244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.415201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.415248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.415262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.415269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.415276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.415289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.425180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.425264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.425278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.425285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.425292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.425306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.435096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.435143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.435157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.435165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.435171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.435185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.445247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.445299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.445313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.445320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.445327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.445341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.455291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.455379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.455394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.455405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.455412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.455426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.465172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.465218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.465232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.465239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.465246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.465260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.475219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.475268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.475284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.475292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.475298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.475317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.485345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.485396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.485410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.485417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.485424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.485438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.073 [2024-12-05 21:24:43.495418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.073 [2024-12-05 21:24:43.495464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.073 [2024-12-05 21:24:43.495478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.073 [2024-12-05 21:24:43.495486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.073 [2024-12-05 21:24:43.495492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.073 [2024-12-05 21:24:43.495506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.073 qpair failed and we were unable to recover it. 
00:31:42.337 [2024-12-05 21:24:43.505411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.337 [2024-12-05 21:24:43.505460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.337 [2024-12-05 21:24:43.505474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.337 [2024-12-05 21:24:43.505481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.337 [2024-12-05 21:24:43.505488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf72490 00:31:42.337 [2024-12-05 21:24:43.505501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.337 qpair failed and we were unable to recover it. 
00:31:42.337 [2024-12-05 21:24:43.515420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.337 [2024-12-05 21:24:43.515577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.337 [2024-12-05 21:24:43.515641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.337 [2024-12-05 21:24:43.515668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.337 [2024-12-05 21:24:43.515690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:31:42.337 [2024-12-05 21:24:43.515747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:42.337 qpair failed and we were unable to recover it. 
00:31:42.337 [2024-12-05 21:24:43.525518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.338 [2024-12-05 21:24:43.525630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.338 [2024-12-05 21:24:43.525660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.338 [2024-12-05 21:24:43.525677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.338 [2024-12-05 21:24:43.525693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30f8000b90 00:31:42.338 [2024-12-05 21:24:43.525724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:42.338 qpair failed and we were unable to recover it. 
00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 
Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Write completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 Read completed with error (sct=0, sc=8) 00:31:42.338 starting I/O failed 00:31:42.338 [2024-12-05 21:24:43.526609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.338 [2024-12-05 21:24:43.535508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.338 [2024-12-05 21:24:43.535611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.338 [2024-12-05 21:24:43.535677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.338 [2024-12-05 21:24:43.535703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.338 [2024-12-05 21:24:43.535726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3104000b90 00:31:42.338 [2024-12-05 21:24:43.535782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.338 qpair failed and we were unable to recover it. 
00:31:42.338 [2024-12-05 21:24:43.545521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.338 [2024-12-05 21:24:43.545599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.338 [2024-12-05 21:24:43.545648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.338 [2024-12-05 21:24:43.545667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.338 [2024-12-05 21:24:43.545683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3104000b90 00:31:42.338 [2024-12-05 21:24:43.545724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.338 qpair failed and we were unable to recover it. 
00:31:42.338 [2024-12-05 21:24:43.555569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.338 [2024-12-05 21:24:43.555619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.338 [2024-12-05 21:24:43.555638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.338 [2024-12-05 21:24:43.555644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.338 [2024-12-05 21:24:43.555650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30fc000b90 00:31:42.338 [2024-12-05 21:24:43.555664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.338 qpair failed and we were unable to recover it. 
00:31:42.338 [2024-12-05 21:24:43.565587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.338 [2024-12-05 21:24:43.565636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.338 [2024-12-05 21:24:43.565655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.338 [2024-12-05 21:24:43.565661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.338 [2024-12-05 21:24:43.565666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f30fc000b90 00:31:42.338 [2024-12-05 21:24:43.565678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.338 qpair failed and we were unable to recover it. 00:31:42.338 [2024-12-05 21:24:43.565848] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:31:42.338 A controller has encountered a failure and is being reset. 00:31:42.338 [2024-12-05 21:24:43.566002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6f030 (9): Bad file descriptor 00:31:42.338 Controller properly reset. 
00:31:42.338 Initializing NVMe Controllers 00:31:42.338 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:42.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:42.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:42.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:42.338 Initialization complete. Launching workers. 00:31:42.338 Starting thread on core 1 00:31:42.338 Starting thread on core 2 00:31:42.338 Starting thread on core 3 00:31:42.338 Starting thread on core 0 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:42.338 00:31:42.338 real 0m11.323s 00:31:42.338 user 0m21.862s 00:31:42.338 sys 0m3.634s 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.338 ************************************ 00:31:42.338 END TEST nvmf_target_disconnect_tc2 00:31:42.338 ************************************ 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:42.338 21:24:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.338 rmmod nvme_tcp 00:31:42.338 rmmod nvme_fabrics 00:31:42.338 rmmod nvme_keyring 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2304460 ']' 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2304460 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2304460 ']' 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2304460 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.338 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304460 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304460' 00:31:42.600 killing process with pid 2304460 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2304460 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2304460 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.600 21:24:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.154 21:24:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.154 00:31:45.154 real 0m22.757s 00:31:45.154 user 0m49.699s 00:31:45.154 
sys 0m10.532s 00:31:45.154 21:24:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.154 21:24:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:45.154 ************************************ 00:31:45.154 END TEST nvmf_target_disconnect 00:31:45.154 ************************************ 00:31:45.154 21:24:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:45.154 00:31:45.154 real 6m47.830s 00:31:45.154 user 11m33.441s 00:31:45.154 sys 2m25.005s 00:31:45.154 21:24:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.154 21:24:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.154 ************************************ 00:31:45.154 END TEST nvmf_host 00:31:45.154 ************************************ 00:31:45.154 21:24:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:45.154 21:24:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:45.154 21:24:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:45.154 21:24:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.154 21:24:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.154 21:24:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:45.154 ************************************ 00:31:45.154 START TEST nvmf_target_core_interrupt_mode 00:31:45.154 ************************************ 00:31:45.154 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:45.154 * Looking for test storage... 
00:31:45.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:45.154 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:45.154 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:31:45.154 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:45.154 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:45.154 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:45.155 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:45.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.155 --rc 
genhtml_branch_coverage=1 00:31:45.155 --rc genhtml_function_coverage=1 00:31:45.155 --rc genhtml_legend=1 00:31:45.155 --rc geninfo_all_blocks=1 00:31:45.155 --rc geninfo_unexecuted_blocks=1 00:31:45.155 00:31:45.155 ' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:45.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.155 --rc genhtml_branch_coverage=1 00:31:45.155 --rc genhtml_function_coverage=1 00:31:45.155 --rc genhtml_legend=1 00:31:45.155 --rc geninfo_all_blocks=1 00:31:45.155 --rc geninfo_unexecuted_blocks=1 00:31:45.155 00:31:45.155 ' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:45.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.155 --rc genhtml_branch_coverage=1 00:31:45.155 --rc genhtml_function_coverage=1 00:31:45.155 --rc genhtml_legend=1 00:31:45.155 --rc geninfo_all_blocks=1 00:31:45.155 --rc geninfo_unexecuted_blocks=1 00:31:45.155 00:31:45.155 ' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:45.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.155 --rc genhtml_branch_coverage=1 00:31:45.155 --rc genhtml_function_coverage=1 00:31:45.155 --rc genhtml_legend=1 00:31:45.155 --rc geninfo_all_blocks=1 00:31:45.155 --rc geninfo_unexecuted_blocks=1 00:31:45.155 00:31:45.155 ' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.155 
21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.155 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.155 
21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.155 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.155 ************************************ 00:31:45.155 START TEST nvmf_abort 00:31:45.155 ************************************ 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:45.156 * Looking for test storage... 
00:31:45.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:45.156 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.156 --rc genhtml_branch_coverage=1 00:31:45.156 --rc genhtml_function_coverage=1 00:31:45.156 --rc genhtml_legend=1 00:31:45.156 --rc geninfo_all_blocks=1 00:31:45.156 --rc geninfo_unexecuted_blocks=1 00:31:45.156 00:31:45.156 ' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.156 --rc genhtml_branch_coverage=1 00:31:45.156 --rc genhtml_function_coverage=1 00:31:45.156 --rc genhtml_legend=1 00:31:45.156 --rc geninfo_all_blocks=1 00:31:45.156 --rc geninfo_unexecuted_blocks=1 00:31:45.156 00:31:45.156 ' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.156 --rc genhtml_branch_coverage=1 00:31:45.156 --rc genhtml_function_coverage=1 00:31:45.156 --rc genhtml_legend=1 00:31:45.156 --rc geninfo_all_blocks=1 00:31:45.156 --rc geninfo_unexecuted_blocks=1 00:31:45.156 00:31:45.156 ' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:45.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.156 --rc genhtml_branch_coverage=1 00:31:45.156 --rc genhtml_function_coverage=1 00:31:45.156 --rc genhtml_legend=1 00:31:45.156 --rc geninfo_all_blocks=1 00:31:45.156 --rc geninfo_unexecuted_blocks=1 00:31:45.156 00:31:45.156 ' 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
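The trace above shows `scripts/common.sh` comparing the installed `lcov` version against 2 (`lt 1.15 2` via `cmp_versions`): both versions are split on `.`, `-`, and `:` and compared field by field. A minimal sketch of that logic, reconstructed from the trace rather than copied from the script, assuming purely numeric version fields:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above (scripts/common.sh cmp_versions).
# Names mirror the trace; the body is a simplified reconstruction that assumes
# every version field is a plain integer.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local IFS=.-:              # split versions on '.', '-', ':' as in the trace
    local ver1 ver2 v
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    # Compare field by field; a missing field counts as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 < d2)); then [[ $op == "<" ]]; return
        elif ((d1 > d2)); then [[ $op == ">" ]]; return
        fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all fields equal
}

lt 1.15 2 && echo "1.15 < 2"
```

In the log this comparison succeeds, which is why the lcov branch/function coverage flags get exported into `LCOV_OPTS` immediately afterwards.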
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.156 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.417 21:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.417 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.418 21:24:46 
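The `build_nvmf_app_args` calls traced above accumulate the target's command line in a bash array, appending `--interrupt-mode` only because the interrupt-mode check (`'[' 1 -eq 1 ']'`) passes for this run. A minimal sketch of that pattern, with illustrative defaults — `TEST_INTERRUPT_MODE` is a hypothetical stand-in for the flag the real script tests:

```shell
#!/usr/bin/env bash
# Sketch of the argument assembly traced above (nvmf/common.sh build_nvmf_app_args).
# Flags accumulate in an array so they later expand word-safely as "${NVMF_APP[@]}".
NVMF_APP_SHM_ID=0
TEST_INTERRUPT_MODE=1          # assumed stand-in for the '[ 1 -eq 1 ]' check in the trace
NVMF_APP=(nvmf_tgt)

build_nvmf_app_args() {
    # Shared-memory id and full tracepoint mask, as in the trace (-i 0 -e 0xFFFF)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    if [ "$TEST_INTERRUPT_MODE" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)
    fi
}

build_nvmf_app_args
echo "${NVMF_APP[@]}"          # → nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
```

Using an array rather than a flat string is what lets the script later prepend the namespace wrapper (`ip netns exec …`) without re-quoting the existing flags.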
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.418 21:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:53.563 21:24:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:53.563 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:53.563 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.563 
21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:53.563 Found net devices under 0000:31:00.0: cvl_0_0 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:53.563 Found net devices under 0000:31:00.1: cvl_0_1 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.563 21:24:54 
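The discovery loop traced above finds, for each supported PCI address, the kernel network interfaces exposed under `/sys/bus/pci/devices/<addr>/net/` (here yielding `cvl_0_0` and `cvl_0_1` for the two E810 ports). A minimal sketch of that walk; `find_pci_net_devs` and its `sysfs_root` parameter are additions of this sketch so the logic can be exercised against a fixture tree instead of real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the NIC discovery traced above (nvmf/common.sh, "Found net devices
# under ..."): list the net interfaces sysfs exposes for each PCI device.
find_pci_net_devs() {
    local sysfs_root=$1 pci net_devs=()
    shift
    for pci in "$@"; do
        local pci_net_devs=("$sysfs_root/$pci/net/"*)
        # With nullglob unset, the glob stays literal when nothing matches; skip those.
        [[ -e ${pci_net_devs[0]} ]] || continue
        net_devs+=("${pci_net_devs[@]##*/}")   # keep only the interface names
    done
    printf '%s\n' "${net_devs[@]}"
}

# Fixture mimicking the two E810 ports from the log:
root=$(mktemp -d)
mkdir -p "$root/0000:31:00.0/net/cvl_0_0" "$root/0000:31:00.1/net/cvl_0_1"
find_pci_net_devs "$root" 0000:31:00.0 0000:31:00.1
```

The `"${pci_net_devs[@]##*/}"` expansion is the same trick the traced script uses at `common.sh@427` to strip the sysfs path prefix from each match.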
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.563 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.564 21:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:31:53.826 00:31:53.826 --- 10.0.0.2 ping statistics --- 00:31:53.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.826 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:53.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:31:53.826 00:31:53.826 --- 10.0.0.1 ping statistics --- 00:31:53.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.826 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2310572 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2310572 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2310572 ']' 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.826 21:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:53.826 [2024-12-05 21:24:55.213512] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:53.826 [2024-12-05 21:24:55.214697] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:31:53.826 [2024-12-05 21:24:55.214748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.086 [2024-12-05 21:24:55.308486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:54.086 [2024-12-05 21:24:55.366934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.086 [2024-12-05 21:24:55.367000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.086 [2024-12-05 21:24:55.367011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.086 [2024-12-05 21:24:55.367017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.086 [2024-12-05 21:24:55.367024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.086 [2024-12-05 21:24:55.369542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.086 [2024-12-05 21:24:55.369711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.086 [2024-12-05 21:24:55.369716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.086 [2024-12-05 21:24:55.445349] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.086 [2024-12-05 21:24:55.445417] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.086 [2024-12-05 21:24:55.446079] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:54.086 [2024-12-05 21:24:55.446344] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.657 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.657 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:54.657 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.657 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.657 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 [2024-12-05 21:24:56.126837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:31:54.917 Malloc0 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 Delay0 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 [2024-12-05 21:24:56.226773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.917 21:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:55.177 [2024-12-05 21:24:56.392052] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:57.089 Initializing NVMe Controllers 00:31:57.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:57.089 controller IO queue size 128 less than required 00:31:57.089 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:57.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:57.089 Initialization complete. Launching workers. 
00:31:57.089 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29116 00:31:57.089 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29173, failed to submit 66 00:31:57.089 success 29116, unsuccessful 57, failed 0 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.089 rmmod nvme_tcp 00:31:57.089 rmmod nvme_fabrics 00:31:57.089 rmmod nvme_keyring 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.089 21:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2310572 ']' 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2310572 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2310572 ']' 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2310572 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:57.089 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310572 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310572' 00:31:57.351 killing process with pid 2310572 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2310572 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2310572 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.351 21:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.351 21:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.897 00:31:59.897 real 0m14.440s 00:31:59.897 user 0m11.161s 00:31:59.897 sys 0m7.734s 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:59.897 ************************************ 00:31:59.897 END TEST nvmf_abort 00:31:59.897 ************************************ 00:31:59.897 21:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.897 ************************************ 00:31:59.897 START TEST nvmf_ns_hotplug_stress 00:31:59.897 ************************************ 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:59.897 * Looking for test storage... 
00:31:59.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:59.897 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:31:59.898 21:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.898 21:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.898 21:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.898 --rc genhtml_branch_coverage=1 00:31:59.898 --rc genhtml_function_coverage=1 00:31:59.898 --rc genhtml_legend=1 00:31:59.898 --rc geninfo_all_blocks=1 00:31:59.898 --rc geninfo_unexecuted_blocks=1 00:31:59.898 00:31:59.898 ' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.898 --rc genhtml_branch_coverage=1 00:31:59.898 --rc genhtml_function_coverage=1 00:31:59.898 --rc genhtml_legend=1 00:31:59.898 --rc geninfo_all_blocks=1 00:31:59.898 --rc geninfo_unexecuted_blocks=1 00:31:59.898 00:31:59.898 ' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.898 --rc genhtml_branch_coverage=1 00:31:59.898 --rc genhtml_function_coverage=1 00:31:59.898 --rc genhtml_legend=1 00:31:59.898 --rc geninfo_all_blocks=1 00:31:59.898 --rc geninfo_unexecuted_blocks=1 00:31:59.898 00:31:59.898 ' 00:31:59.898 21:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.898 --rc genhtml_branch_coverage=1 00:31:59.898 --rc genhtml_function_coverage=1 00:31:59.898 --rc genhtml_legend=1 00:31:59.898 --rc geninfo_all_blocks=1 00:31:59.898 --rc geninfo_unexecuted_blocks=1 00:31:59.898 00:31:59.898 ' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.898 21:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.898 
21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.898 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.899 21:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.045 
21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.045 21:25:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:08.045 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.045 21:25:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:08.045 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.045 
21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:08.045 Found net devices under 0000:31:00.0: cvl_0_0 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:08.045 Found net devices under 0000:31:00.1: cvl_0_1 00:32:08.045 
21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.045 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.741 ms 00:32:08.046 00:32:08.046 --- 10.0.0.2 ping statistics --- 00:32:08.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.046 rtt min/avg/max/mdev = 0.741/0.741/0.741/0.000 ms 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:08.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:32:08.046 00:32:08.046 --- 10.0.0.1 ping statistics --- 00:32:08.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.046 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.046 21:25:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.046 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.307 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:08.307 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.307 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.307 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.307 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2315920 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2315920 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2315920 ']' 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.308 21:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:08.308 [2024-12-05 21:25:09.544920] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:08.308 [2024-12-05 21:25:09.545910] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:32:08.308 [2024-12-05 21:25:09.545946] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.308 [2024-12-05 21:25:09.648952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:08.308 [2024-12-05 21:25:09.689647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.308 [2024-12-05 21:25:09.689692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.308 [2024-12-05 21:25:09.689701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.308 [2024-12-05 21:25:09.689713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.308 [2024-12-05 21:25:09.689719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:08.308 [2024-12-05 21:25:09.691303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.308 [2024-12-05 21:25:09.691467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.308 [2024-12-05 21:25:09.691467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:08.569 [2024-12-05 21:25:09.759435] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:08.569 [2024-12-05 21:25:09.759500] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.569 [2024-12-05 21:25:09.760209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:08.569 [2024-12-05 21:25:09.760452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:09.141 [2024-12-05 21:25:10.536337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.141 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:09.402 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.663 [2024-12-05 21:25:10.901054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.663 21:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.924 21:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:09.924 Malloc0 00:32:09.924 21:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:10.184 Delay0 00:32:10.184 21:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.445 21:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:10.445 NULL1 00:32:10.706 21:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:32:10.707 21:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2316306 00:32:10.707 21:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:10.707 21:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:10.707 21:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:12.094 Read completed with error (sct=0, sc=11) 00:32:12.094 21:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:12.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:32:12.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:12.094 21:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:12.094 21:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:12.354 true 00:32:12.354 21:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:12.354 21:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.296 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.296 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:13.296 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:13.556 true 00:32:13.556 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:13.556 21:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.556 21:25:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.819 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:13.819 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:14.079 true 00:32:14.079 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:14.079 21:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.021 21:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:15.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:15.282 21:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:15.282 21:25:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:15.543 true 00:32:15.543 21:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:15.543 21:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:16.484 21:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.484 21:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:16.484 21:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:16.745 true 00:32:16.745 21:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:16.745 21:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.745 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.006 21:25:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:17.006 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:17.266 true 00:32:17.266 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:17.266 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.528 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.528 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:17.528 21:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:17.789 true 00:32:17.789 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:17.789 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.050 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:32:18.050 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:18.050 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:18.311 true 00:32:18.311 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:18.311 21:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 21:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.698 21:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:19.698 21:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:19.698 true 00:32:19.960 21:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:19.960 21:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.531 21:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.792 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:20.792 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:21.052 true 00:32:21.052 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:21.052 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:21.312 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.312 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:21.312 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:21.572 true 00:32:21.572 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:21.572 21:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 21:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.959 21:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:22.959 21:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:23.218 true 00:32:23.218 21:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:23.218 21:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:24.155 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.155 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:24.155 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:24.414 true 00:32:24.414 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:24.414 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.414 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.674 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:24.674 21:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:24.934 true 00:32:24.934 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2316306 00:32:24.934 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.934 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.194 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:25.194 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:25.454 true 00:32:25.454 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:25.454 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.714 21:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.714 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:25.714 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:25.974 true 00:32:25.974 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:25.974 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.234 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.234 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:26.234 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:26.495 true 00:32:26.495 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:26.495 21:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.758 21:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.758 21:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:26.758 21:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:27.019 true 00:32:27.019 21:25:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:27.019 21:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 21:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:28.403 21:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:28.403 21:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:28.664 true 00:32:28.664 21:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:28.664 21:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.605 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:32:29.605 21:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:29.606 21:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:29.606 21:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:29.866 true 00:32:29.866 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:29.866 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.126 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.126 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:30.126 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:30.387 true 00:32:30.387 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:30.387 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:30.649 21:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:30.649 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:30.649 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:30.911 true 00:32:30.911 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:30.911 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:31.171 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.171 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:31.171 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:31.431 true 00:32:31.431 21:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:31.431 21:25:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 21:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:32.814 21:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:32.814 21:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:33.073 true 00:32:33.073 21:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:33.073 21:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.011 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.012 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:34.012 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:34.012 true 00:32:34.272 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:34.272 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.272 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:34.532 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:34.532 21:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:34.792 true 00:32:34.792 21:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:34.792 21:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.735 21:25:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:35.996 21:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:35.996 21:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:36.256 true 00:32:36.256 21:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:36.256 21:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:37.198 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.198 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:37.198 21:25:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:37.459 true 00:32:37.459 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:37.459 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.459 21:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:37.720 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:37.720 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:37.981 true 00:32:37.981 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:37.981 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.981 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.241 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 
00:32:38.241 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:38.500 true 00:32:38.500 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:38.500 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:38.759 21:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:38.759 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:32:38.759 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:32:39.017 true 00:32:39.017 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:39.017 21:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 21:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:40.411 21:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:32:40.411 21:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:32:40.707 true 00:32:40.707 21:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:40.707 21:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.318 21:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:41.318 Initializing NVMe Controllers 00:32:41.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:41.318 Controller IO queue size 128, less than required. 00:32:41.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:41.318 Controller IO queue size 128, less than required. 
00:32:41.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:41.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:41.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:41.318 Initialization complete. Launching workers. 00:32:41.318 ======================================================== 00:32:41.318 Latency(us) 00:32:41.318 Device Information : IOPS MiB/s Average min max 00:32:41.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2102.35 1.03 37112.84 1991.67 1098999.73 00:32:41.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17670.23 8.63 7243.48 1515.22 402735.58 00:32:41.318 ======================================================== 00:32:41.318 Total : 19772.58 9.65 10419.39 1515.22 1098999.73 00:32:41.318 00:32:41.579 21:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:32:41.579 21:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:41.839 true 00:32:41.839 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2316306 00:32:41.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2316306) - No such process 00:32:41.839 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2316306 00:32:41.840 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:41.840 21:25:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:42.100 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:42.100 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:42.100 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:42.100 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:42.100 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:42.361 null0 00:32:42.361 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:42.361 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:42.361 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:42.361 null1 00:32:42.361 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:42.361 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:42.361 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:42.623 null2 00:32:42.623 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:42.623 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:42.623 21:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:42.623 null3 00:32:42.884 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:42.884 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:42.884 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:42.884 null4 00:32:42.884 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:42.884 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:42.884 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:43.145 null5 00:32:43.145 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:43.145 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:43.145 21:25:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:43.145 null6 00:32:43.145 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:43.145 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:43.145 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:43.407 null7 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.407 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
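The trace above shows ns_hotplug_stress.sh spawning eight background add_remove workers (one per null bdev), each of which repeatedly attaches and detaches a namespace on cnode1 ten times via rpc.py, with the pids collected (`pids+=($!)`) and later reaped with `wait`. The following is a minimal runnable sketch of that control flow, not the SPDK script itself: the real rpc.py call is stubbed with an echo so the loop logic can run standalone without a live NVMe-oF target.

```shell
# Hypothetical stand-in for scripts/rpc.py: just record the call.
rpc() { echo "rpc.py $*"; }

# Mirrors the add_remove helper traced at ns_hotplug_stress.sh@14-18:
# attach namespace $nsid backed by $bdev, then detach it, 10 times.
add_remove() {
    nsid=$1; bdev=$2
    i=0
    while [ "$i" -lt 10 ]; do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        i=$((i + 1))
    done
}

# One worker's output: 10 adds + 10 removes = 20 RPC invocations.
# In the real test, 8 such workers run concurrently in the background
# and the script waits on their pids.
lines=$(add_remove 1 null0 | wc -l)
echo "$((lines))"
```

Running one worker in the foreground prints 20, matching the 10 add/remove iterations per namespace that the interleaved trace entries above come from.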
00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2322678 2322681 2322683 2322685 2322688 2322691 2322693 2322695 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.408 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:43.669 21:25:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:43.669 21:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:43.669 21:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.669 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:43.929 21:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:43.929 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:43.929 21:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:44.190 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.190 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.190 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:44.190 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.190 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.191 21:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:44.191 21:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.191 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:44.451 21:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.451 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.452 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:44.713 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.713 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.713 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:44.713 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.713 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.713 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:44.714 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.714 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.714 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:44.714 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.714 21:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.714 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:44.975 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:45.238 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:45.501 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:45.763 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.763 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.763 21:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:45.763 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:45.764 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:45.764 21:25:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.026 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.288 21:25:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.288 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:46.550 21:25:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:46.550 21:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.812 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:47.073 21:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.073 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.333 rmmod nvme_tcp 00:32:47.333 rmmod nvme_fabrics 00:32:47.333 rmmod nvme_keyring 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2315920 ']' 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2315920 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2315920 ']' 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2315920 00:32:47.333 21:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2315920 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2315920' 00:32:47.333 killing process with pid 2315920 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2315920 00:32:47.333 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2315920 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.594 21:25:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.594 21:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.143 00:32:50.143 real 0m50.052s 00:32:50.143 user 3m0.984s 00:32:50.143 sys 0m21.504s 00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:50.143 ************************************ 00:32:50.143 END TEST nvmf_ns_hotplug_stress 00:32:50.143 ************************************ 00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:32:50.143 21:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.143 ************************************ 00:32:50.143 START TEST nvmf_delete_subsystem 00:32:50.143 ************************************ 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:50.143 * Looking for test storage... 00:32:50.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.143 21:25:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.143 --rc genhtml_branch_coverage=1 00:32:50.143 --rc genhtml_function_coverage=1 00:32:50.143 --rc genhtml_legend=1 00:32:50.143 --rc geninfo_all_blocks=1 00:32:50.143 --rc geninfo_unexecuted_blocks=1 00:32:50.143 00:32:50.143 ' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.143 --rc genhtml_branch_coverage=1 00:32:50.143 --rc genhtml_function_coverage=1 00:32:50.143 --rc genhtml_legend=1 00:32:50.143 --rc geninfo_all_blocks=1 00:32:50.143 --rc geninfo_unexecuted_blocks=1 00:32:50.143 00:32:50.143 ' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.143 --rc genhtml_branch_coverage=1 00:32:50.143 --rc genhtml_function_coverage=1 00:32:50.143 --rc genhtml_legend=1 00:32:50.143 --rc geninfo_all_blocks=1 00:32:50.143 --rc geninfo_unexecuted_blocks=1 00:32:50.143 00:32:50.143 ' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.143 --rc genhtml_branch_coverage=1 00:32:50.143 --rc genhtml_function_coverage=1 00:32:50.143 --rc genhtml_legend=1 00:32:50.143 --rc geninfo_all_blocks=1 00:32:50.143 --rc geninfo_unexecuted_blocks=1 00:32:50.143 00:32:50.143 ' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.143 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.144 21:25:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.144 21:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:58.288 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.289 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.289 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:58.289 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:58.289 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.289 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:58.289 Found net devices under 0000:31:00.0: cvl_0_0 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:58.289 Found net devices under 0000:31:00.1: cvl_0_1 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.289 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.289 21:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:58.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:32:58.289 00:32:58.289 --- 10.0.0.2 ping statistics --- 00:32:58.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.289 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:58.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:58.289 00:32:58.289 --- 10.0.0.1 ping statistics --- 00:32:58.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.289 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:58.289 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:58.290 
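The `nvmf_tcp_init` trace above builds the test topology: one port of the dual-port ice NIC is moved into a private network namespace so that initiator and target traffic crosses the physical link rather than the loopback path, then both sides ping each other to verify connectivity. A dry-run sketch of that sequence (interface and namespace names `cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk` are taken from the log; the real commands need root and those NICs, so the sketch prints them by default):

```shell
# Sketch of the nvmf_tcp_init steps from nvmf/common.sh as seen in the log.
# DRY_RUN=1 (the default here) echoes each command instead of executing it.
nvmf_tcp_init_sketch() {
    local run="eval"
    if [ "${DRY_RUN:-1}" = 1 ]; then run="echo"; fi
    # Move the target-side port into its own namespace.
    $run "ip netns add cvl_0_0_ns_spdk"
    $run "ip link set cvl_0_0 netns cvl_0_0_ns_spdk"
    # Initiator keeps 10.0.0.1; the target namespace gets 10.0.0.2.
    $run "ip addr add 10.0.0.1/24 dev cvl_0_1"
    $run "ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0"
    $run "ip link set cvl_0_1 up"
    $run "ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up"
    $run "ip netns exec cvl_0_0_ns_spdk ip link set lo up"
    # Open the NVMe/TCP listen port (4420) through the host firewall.
    $run "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
    # Sanity check: each side pings the other across the namespace boundary.
    $run "ping -c 1 10.0.0.2"
    $run "ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1"
}
nvmf_tcp_init_sketch
```

The namespace split is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk` (`NVMF_TARGET_NS_CMD` in the log).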
21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2328252 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2328252 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2328252 ']' 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.290 21:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:58.290 [2024-12-05 21:25:59.630837] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.290 [2024-12-05 21:25:59.631812] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:32:58.290 [2024-12-05 21:25:59.631848] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.290 [2024-12-05 21:25:59.714930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:58.549 [2024-12-05 21:25:59.749801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.549 [2024-12-05 21:25:59.749834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.549 [2024-12-05 21:25:59.749844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.549 [2024-12-05 21:25:59.749852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.549 [2024-12-05 21:25:59.749858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.549 [2024-12-05 21:25:59.751084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.549 [2024-12-05 21:25:59.751089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.549 [2024-12-05 21:25:59.806547] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:58.549 [2024-12-05 21:25:59.807169] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:58.550 [2024-12-05 21:25:59.807480] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 [2024-12-05 21:26:00.459559] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 [2024-12-05 21:26:00.488222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 NULL1 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 Delay0 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2328395 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:59.120 21:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:59.381 [2024-12-05 21:26:00.583324] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
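The `rpc_cmd` calls traced above assemble the delete_subsystem test target: a TCP transport, subsystem `cnode1` with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O is still in flight when the subsystem is deleted mid-workload. Re-expressed as explicit `rpc.py` invocations (a sketch only: in the log, `rpc_cmd` wraps SPDK's `scripts/rpc.py` against the target's `/var/tmp/spdk.sock`; the leading `echo` makes this a dry run):

```shell
# Dry-run sketch of the delete_subsystem setup RPCs; drop the echo to issue
# them against a running nvmf_tgt.
RPC="echo scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# NULL1: 1000 MiB null bdev with 512-byte blocks. Delay0 wraps it with a
# 1000000 us (1 s) latency on reads and writes, average and p99 alike, so
# commands queue up inside the target during the perf run.
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

With this in place, `spdk_nvme_perf` connects to the namespace and the test deletes `cnode1` while its queues are busy, which is what produces the aborted completions that follow.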
00:33:01.293 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.293 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.293 21:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, 
sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 Write completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 starting I/O failed: -6 00:33:01.555 [2024-12-05 21:26:02.753164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a12f00 is same with the state(6) to be set 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.555 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 
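The flood of `completed with error (sct=0, sc=8)` lines above is expected for this test: `sct` is the NVMe Status Code Type and `sc` the Status Code, and in the Generic Command Status table (sct=0) a status code of 8 reads as "Command Aborted due to SQ Deletion", which is exactly what deleting the subsystem under load should produce (the `-6` on the `starting I/O failed` lines appears to be a negative errno from the initiator side). A small decoder for the values seen in this log, based on the NVMe base specification's generic status table:

```shell
# Decode the (sct, sc) pairs printed in the completion log lines.
# Only the Generic Command Status (sct=0) values seen here are mapped.
decode_nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -eq 0 ]; then
        case "$sc" in
            0) echo "Successful Completion" ;;
            7) echo "Command Abort Requested" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "Generic Command Status, sc=$sc" ;;
        esac
    else
        echo "sct=$sct sc=$sc (non-generic status code type)"
    fi
}
decode_nvme_status 0 8   # prints: Command Aborted due to SQ Deletion
```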
00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed 
with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, 
sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 [2024-12-05 21:26:02.755498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7f800d4b0 is same with the state(6) to be set 00:33:01.556 starting I/O failed: -6 00:33:01.556 starting I/O failed: -6 00:33:01.556 starting I/O failed: -6 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, 
sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Write completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:01.556 Read completed with error (sct=0, sc=8) 00:33:02.498 [2024-12-05 21:26:03.722176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a145f0 is same with the state(6) to be set 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 
00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 [2024-12-05 21:26:03.756643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a130e0 is same with the state(6) to be set 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write 
completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 [2024-12-05 21:26:03.757715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a134a0 is same with the state(6) to be set 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 [2024-12-05 21:26:03.757968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7f800d7e0 is same with the state(6) to be set 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read 
completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 Write completed with error (sct=0, sc=8) 00:33:02.498 Read completed with error (sct=0, sc=8) 00:33:02.498 [2024-12-05 21:26:03.758104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7f800d020 is same with the state(6) to be set 00:33:02.498 Initializing NVMe Controllers 00:33:02.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:02.498 Controller IO queue size 128, less than required. 00:33:02.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:02.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:02.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:02.498 Initialization complete. Launching workers. 
00:33:02.498 ======================================================== 00:33:02.498 Latency(us) 00:33:02.498 Device Information : IOPS MiB/s Average min max 00:33:02.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.73 0.08 895118.96 240.70 1007782.86 00:33:02.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.31 0.07 955447.22 243.69 2001995.14 00:33:02.498 ======================================================== 00:33:02.498 Total : 322.04 0.16 923651.34 240.70 2001995.14 00:33:02.498 00:33:02.498 [2024-12-05 21:26:03.758639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a145f0 (9): Bad file descriptor 00:33:02.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:02.498 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.499 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:02.499 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2328395 00:33:02.499 21:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2328395 00:33:03.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2328395) - No such process 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2328395 00:33:03.069 21:26:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2328395 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2328395 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:33:03.069 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:03.070 [2024-12-05 21:26:04.291773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2329236 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:03.070 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:03.070 [2024-12-05 21:26:04.363219] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:03.640 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:03.640 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:03.640 21:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:03.925 21:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:03.925 21:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:03.925 21:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:04.497 21:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:04.497 21:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:04.497 21:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.068 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:33:05.068 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:05.068 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:05.638 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:05.638 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:05.638 21:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:06.209 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:06.209 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:06.209 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:06.209 Initializing NVMe Controllers 00:33:06.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:06.209 Controller IO queue size 128, less than required. 00:33:06.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:06.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:06.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:06.209 Initialization complete. Launching workers. 
00:33:06.209 ======================================================== 00:33:06.209 Latency(us) 00:33:06.209 Device Information : IOPS MiB/s Average min max 00:33:06.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002903.27 1000275.60 1043716.71 00:33:06.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003822.53 1000280.53 1041769.47 00:33:06.209 ======================================================== 00:33:06.209 Total : 256.00 0.12 1003362.90 1000275.60 1043716.71 00:33:06.209 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2329236 00:33:06.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2329236) - No such process 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2329236 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.470 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.470 rmmod nvme_tcp 00:33:06.470 rmmod nvme_fabrics 00:33:06.470 rmmod nvme_keyring 00:33:06.731 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2328252 ']' 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2328252 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2328252 ']' 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2328252 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2328252 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:06.732 21:26:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2328252' 00:33:06.732 killing process with pid 2328252 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2328252 00:33:06.732 21:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2328252 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.732 21:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.732 21:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.278 00:33:09.278 real 0m19.165s 00:33:09.278 user 0m27.042s 00:33:09.278 sys 0m7.861s 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:09.278 ************************************ 00:33:09.278 END TEST nvmf_delete_subsystem 00:33:09.278 ************************************ 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.278 ************************************ 00:33:09.278 START TEST nvmf_host_management 00:33:09.278 ************************************ 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:09.278 * Looking for test storage... 
00:33:09.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.278 21:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:09.278 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.278 --rc genhtml_branch_coverage=1 00:33:09.278 --rc genhtml_function_coverage=1 00:33:09.278 --rc genhtml_legend=1 00:33:09.278 --rc geninfo_all_blocks=1 00:33:09.278 --rc geninfo_unexecuted_blocks=1 00:33:09.278 00:33:09.278 ' 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.278 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.279 21:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 
21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.279 21:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.434 
21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.434 21:26:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:17.434 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.434 21:26:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:17.434 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.434 21:26:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:17.434 Found net devices under 0000:31:00.0: cvl_0_0 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.434 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:17.435 Found net devices under 0000:31:00.1: cvl_0_1 00:33:17.435 21:26:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:17.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:33:17.435 00:33:17.435 --- 10.0.0.2 ping statistics --- 00:33:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.435 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:17.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:33:17.435 00:33:17.435 --- 10.0.0.1 ping statistics --- 00:33:17.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.435 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
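The nvmf/common.sh@265-291 sequence traced above carves the target NIC into its own network namespace so the initiator and target sides of the TCP test cross a real link. A dry-run sketch of that plumbing follows; `run` only echoes each command, since the real `ip`/`iptables` invocations need root and the `cvl_0_0`/`cvl_0_1` interfaces present on this CI host.

```shell
# Dry-run sketch of the nvmf_tcp_init namespace setup traced above.
# "run" echoes instead of executing: the real commands require root
# and the cvl_0_* net devices enumerated earlier in the log.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"          # target NIC moves into the namespace
run ip addr add "$INI_IP/24" dev "$INI_IF"     # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                        # sanity-check both directions,
run ip netns exec "$NS" ping -c 1 "$INI_IP"    # matching the ping output above
```

Because the target runs inside the namespace, every later RPC or app launch in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is what `NVMF_TARGET_NS_CMD` expands to.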
00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2334582 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2334582 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2334582 ']' 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.435 21:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:17.435 [2024-12-05 21:26:18.743319] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:17.435 [2024-12-05 21:26:18.744310] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:33:17.435 [2024-12-05 21:26:18.744347] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.435 [2024-12-05 21:26:18.846678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:17.698 [2024-12-05 21:26:18.883046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.698 [2024-12-05 21:26:18.883082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.698 [2024-12-05 21:26:18.883090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.698 [2024-12-05 21:26:18.883097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.698 [2024-12-05 21:26:18.883106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:17.698 [2024-12-05 21:26:18.887877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.698 [2024-12-05 21:26:18.888035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.698 [2024-12-05 21:26:18.888237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.698 [2024-12-05 21:26:18.888237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:17.698 [2024-12-05 21:26:18.943815] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:17.698 [2024-12-05 21:26:18.944423] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:17.698 [2024-12-05 21:26:18.945372] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:17.698 [2024-12-05 21:26:18.945610] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:17.698 [2024-12-05 21:26:18.945763] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
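The reactor notices above correspond to the `-m 0x1E` core mask passed to `nvmf_tgt`: bits 1-4 are set, so reactors start on cores 1, 2, 3 and 4 (in whatever order they come up). A small helper, hypothetical and not part of SPDK, can expand such a mask for inspection:

```shell
# Hypothetical helper (not part of SPDK): expand a reactor cpumask such
# as the -m 0x1E used above into the core numbers it selects.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="${cores:+$cores }$core"   # this bit is set: core is in the mask
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "$cores"
}

mask_to_cores 0x1E   # 0x1E = 0b11110 -> cores 1 2 3 4, matching the reactor log
```

The bdevperf initiator later in the log uses `-c 0x1` instead, which is why its single reactor starts on core 0.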
00:33:18.270 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.270 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:18.270 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:18.270 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.270 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.270 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.271 [2024-12-05 21:26:19.573116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.271 21:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.271 Malloc0 00:33:18.271 [2024-12-05 21:26:19.665389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.271 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2334688 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2334688 /var/tmp/bdevperf.sock 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2334688 ']' 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:18.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.532 { 00:33:18.532 "params": { 00:33:18.532 "name": "Nvme$subsystem", 00:33:18.532 "trtype": "$TEST_TRANSPORT", 00:33:18.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.532 "adrfam": "ipv4", 00:33:18.532 "trsvcid": "$NVMF_PORT", 00:33:18.532 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.532 "hdgst": ${hdgst:-false}, 00:33:18.532 "ddgst": ${ddgst:-false} 00:33:18.532 }, 00:33:18.532 "method": "bdev_nvme_attach_controller" 00:33:18.532 } 00:33:18.532 EOF 00:33:18.532 )") 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:18.532 21:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.532 "params": { 00:33:18.532 "name": "Nvme0", 00:33:18.532 "trtype": "tcp", 00:33:18.532 "traddr": "10.0.0.2", 00:33:18.532 "adrfam": "ipv4", 00:33:18.532 "trsvcid": "4420", 00:33:18.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:18.532 "hdgst": false, 00:33:18.532 "ddgst": false 00:33:18.532 }, 00:33:18.532 "method": "bdev_nvme_attach_controller" 00:33:18.532 }' 00:33:18.532 [2024-12-05 21:26:19.777223] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:33:18.532 [2024-12-05 21:26:19.777289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334688 ] 00:33:18.532 [2024-12-05 21:26:19.858021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.532 [2024-12-05 21:26:19.894432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.794 Running I/O for 10 seconds... 
00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:19.368 21:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:19.368 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.369 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.369 
[2024-12-05 21:26:20.641077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 
21:26:20.641518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.369 [2024-12-05 21:26:20.641712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.369 [2024-12-05 21:26:20.641721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.641989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.641996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642014] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 
21:26:20.642219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.370 [2024-12-05 21:26:20.642244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.642272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:19.370 [2024-12-05 21:26:20.643505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:19.370 task offset: 85504 on job bdev=Nvme0n1 fails 00:33:19.370 00:33:19.370 Latency(us) 00:33:19.370 [2024-12-05T20:26:20.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.370 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:19.370 Job: Nvme0n1 ended in about 0.44 seconds with error 00:33:19.370 Verification LBA range: start 0x0 length 0x400 00:33:19.370 Nvme0n1 : 0.44 1462.67 91.42 145.13 0.00 38645.07 1597.44 36481.71 00:33:19.370 [2024-12-05T20:26:20.807Z] =================================================================================================================== 00:33:19.370 [2024-12-05T20:26:20.807Z] Total : 1462.67 91.42 145.13 0.00 38645.07 1597.44 36481.71 00:33:19.370 [2024-12-05 21:26:20.645515] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:19.370 [2024-12-05 21:26:20.645537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xf8eb10 (9): Bad file descriptor 00:33:19.370 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.370 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:19.370 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.370 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:19.370 [2024-12-05 21:26:20.646833] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:33:19.370 [2024-12-05 21:26:20.646911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:19.370 [2024-12-05 21:26:20.646934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.370 [2024-12-05 21:26:20.646949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:33:19.370 [2024-12-05 21:26:20.646956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:33:19.370 [2024-12-05 21:26:20.646964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.371 [2024-12-05 21:26:20.646972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8eb10 00:33:19.371 [2024-12-05 21:26:20.646991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8eb10 (9): Bad file descriptor 
00:33:19.371 [2024-12-05 21:26:20.647005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:19.371 [2024-12-05 21:26:20.647012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:19.371 [2024-12-05 21:26:20.647020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:19.371 [2024-12-05 21:26:20.647029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:19.371 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.371 21:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2334688 00:33:20.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2334688) - No such process 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@560 -- # config=() 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.313 { 00:33:20.313 "params": { 00:33:20.313 "name": "Nvme$subsystem", 00:33:20.313 "trtype": "$TEST_TRANSPORT", 00:33:20.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.313 "adrfam": "ipv4", 00:33:20.313 "trsvcid": "$NVMF_PORT", 00:33:20.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.313 "hdgst": ${hdgst:-false}, 00:33:20.313 "ddgst": ${ddgst:-false} 00:33:20.313 }, 00:33:20.313 "method": "bdev_nvme_attach_controller" 00:33:20.313 } 00:33:20.313 EOF 00:33:20.313 )") 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:20.313 21:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:20.313 "params": { 00:33:20.313 "name": "Nvme0", 00:33:20.313 "trtype": "tcp", 00:33:20.313 "traddr": "10.0.0.2", 00:33:20.313 "adrfam": "ipv4", 00:33:20.313 "trsvcid": "4420", 00:33:20.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.313 "hdgst": false, 00:33:20.313 "ddgst": false 00:33:20.313 }, 00:33:20.313 "method": "bdev_nvme_attach_controller" 00:33:20.313 }' 00:33:20.313 [2024-12-05 21:26:21.715491] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:33:20.313 [2024-12-05 21:26:21.715548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2335062 ] 00:33:20.573 [2024-12-05 21:26:21.793887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.573 [2024-12-05 21:26:21.829881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.573 Running I/O for 1 seconds... 
00:33:21.958 1850.00 IOPS, 115.62 MiB/s 00:33:21.958 Latency(us) 00:33:21.958 [2024-12-05T20:26:23.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:21.958 Verification LBA range: start 0x0 length 0x400 00:33:21.958 Nvme0n1 : 1.05 1821.91 113.87 0.00 0.00 33115.25 2757.97 44127.57 00:33:21.958 [2024-12-05T20:26:23.395Z] =================================================================================================================== 00:33:21.958 [2024-12-05T20:26:23.395Z] Total : 1821.91 113.87 0.00 0.00 33115.25 2757.97 44127.57 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:33:21.958 
21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.958 rmmod nvme_tcp 00:33:21.958 rmmod nvme_fabrics 00:33:21.958 rmmod nvme_keyring 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2334582 ']' 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2334582 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2334582 ']' 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2334582 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334582 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:21.958 21:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334582' 00:33:21.958 killing process with pid 2334582 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2334582 00:33:21.958 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2334582 00:33:21.958 [2024-12-05 21:26:23.385746] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.219 21:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:24.129 00:33:24.129 real 0m15.232s 00:33:24.129 user 0m19.062s 00:33:24.129 sys 0m7.835s 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:24.129 ************************************ 00:33:24.129 END TEST nvmf_host_management 00:33:24.129 ************************************ 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.129 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:24.390 ************************************ 00:33:24.390 START TEST nvmf_lvol 00:33:24.390 ************************************ 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:24.390 * Looking for test storage... 
00:33:24.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.390 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.391 --rc genhtml_branch_coverage=1 00:33:24.391 --rc genhtml_function_coverage=1 00:33:24.391 --rc genhtml_legend=1 00:33:24.391 --rc geninfo_all_blocks=1 00:33:24.391 --rc geninfo_unexecuted_blocks=1 00:33:24.391 00:33:24.391 ' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.391 --rc genhtml_branch_coverage=1 00:33:24.391 --rc genhtml_function_coverage=1 00:33:24.391 --rc genhtml_legend=1 00:33:24.391 --rc geninfo_all_blocks=1 00:33:24.391 --rc geninfo_unexecuted_blocks=1 00:33:24.391 00:33:24.391 ' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.391 --rc genhtml_branch_coverage=1 00:33:24.391 --rc genhtml_function_coverage=1 00:33:24.391 --rc genhtml_legend=1 00:33:24.391 --rc geninfo_all_blocks=1 00:33:24.391 --rc geninfo_unexecuted_blocks=1 00:33:24.391 00:33:24.391 ' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:24.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.391 --rc genhtml_branch_coverage=1 00:33:24.391 --rc genhtml_function_coverage=1 00:33:24.391 --rc genhtml_legend=1 00:33:24.391 --rc geninfo_all_blocks=1 00:33:24.391 --rc geninfo_unexecuted_blocks=1 00:33:24.391 00:33:24.391 ' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:24.391 
21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.391 21:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:32.529 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:32.530 21:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:32.530 21:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:32.530 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:32.530 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:32.530 21:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:32.530 Found net devices under 0000:31:00.0: cvl_0_0 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.530 21:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:32.530 Found net devices under 0000:31:00.1: cvl_0_1 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.530 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:32.531 21:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.791 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:32.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:33:32.792 00:33:32.792 --- 10.0.0.2 ping statistics --- 00:33:32.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.792 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:32.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:33:32.792 00:33:32.792 --- 10.0.0.1 ping statistics --- 00:33:32.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.792 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2340072 
00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2340072 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2340072 ']' 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.792 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:32.792 [2024-12-05 21:26:34.136680] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:32.792 [2024-12-05 21:26:34.137818] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:33:32.792 [2024-12-05 21:26:34.137876] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.052 [2024-12-05 21:26:34.229199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:33.052 [2024-12-05 21:26:34.270442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.052 [2024-12-05 21:26:34.270478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.052 [2024-12-05 21:26:34.270487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.052 [2024-12-05 21:26:34.270494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.052 [2024-12-05 21:26:34.270500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.052 [2024-12-05 21:26:34.271899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.052 [2024-12-05 21:26:34.272079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.052 [2024-12-05 21:26:34.272083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.052 [2024-12-05 21:26:34.329278] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:33.052 [2024-12-05 21:26:34.329903] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:33.052 [2024-12-05 21:26:34.330162] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:33.052 [2024-12-05 21:26:34.330338] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.623 21:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:33.882 [2024-12-05 21:26:35.128934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.882 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:34.141 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:34.141 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:34.141 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:34.141 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:34.428 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:34.747 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ce86492a-4c7b-4407-a2ac-b1ccf1a80618 00:33:34.747 21:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce86492a-4c7b-4407-a2ac-b1ccf1a80618 lvol 20 00:33:34.747 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=41e635ac-0a35-4da8-a2ef-6059566bf786 00:33:34.747 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:35.007 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 41e635ac-0a35-4da8-a2ef-6059566bf786 00:33:35.007 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:35.268 [2024-12-05 21:26:36.540738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.268 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:35.528 
21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2340766 00:33:35.528 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:35.528 21:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:36.473 21:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 41e635ac-0a35-4da8-a2ef-6059566bf786 MY_SNAPSHOT 00:33:36.736 21:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5ad1c13e-4374-48c4-a9a0-d499a2439d1a 00:33:36.736 21:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 41e635ac-0a35-4da8-a2ef-6059566bf786 30 00:33:36.997 21:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5ad1c13e-4374-48c4-a9a0-d499a2439d1a MY_CLONE 00:33:36.997 21:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ba729edb-c311-4b40-964f-a8a038fd7000 00:33:36.997 21:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ba729edb-c311-4b40-964f-a8a038fd7000 00:33:37.570 21:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2340766 00:33:45.712 Initializing NVMe Controllers 00:33:45.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:45.713 
Controller IO queue size 128, less than required. 00:33:45.713 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:45.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:45.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:45.713 Initialization complete. Launching workers. 00:33:45.713 ======================================================== 00:33:45.713 Latency(us) 00:33:45.713 Device Information : IOPS MiB/s Average min max 00:33:45.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12260.60 47.89 10446.01 1568.63 64347.58 00:33:45.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15224.50 59.47 8407.83 2356.71 49757.85 00:33:45.713 ======================================================== 00:33:45.713 Total : 27485.10 107.36 9317.03 1568.63 64347.58 00:33:45.713 00:33:45.713 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:45.973 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 41e635ac-0a35-4da8-a2ef-6059566bf786 00:33:45.973 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce86492a-4c7b-4407-a2ac-b1ccf1a80618 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.233 rmmod nvme_tcp 00:33:46.233 rmmod nvme_fabrics 00:33:46.233 rmmod nvme_keyring 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2340072 ']' 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2340072 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2340072 ']' 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2340072 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.233 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2340072 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2340072' 00:33:46.494 killing process with pid 2340072 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2340072 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2340072 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.494 21:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.494 21:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.044 00:33:49.044 real 0m24.359s 00:33:49.044 user 0m55.758s 00:33:49.044 sys 0m11.035s 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:49.044 ************************************ 00:33:49.044 END TEST nvmf_lvol 00:33:49.044 ************************************ 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.044 21:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:49.044 ************************************ 00:33:49.044 START TEST nvmf_lvs_grow 00:33:49.044 ************************************ 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:49.044 * Looking for test storage... 
00:33:49.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:49.044 21:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:49.044 21:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.044 --rc genhtml_branch_coverage=1 00:33:49.044 --rc genhtml_function_coverage=1 00:33:49.044 --rc genhtml_legend=1 00:33:49.044 --rc geninfo_all_blocks=1 00:33:49.044 --rc geninfo_unexecuted_blocks=1 00:33:49.044 00:33:49.044 ' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.044 --rc genhtml_branch_coverage=1 00:33:49.044 --rc genhtml_function_coverage=1 00:33:49.044 --rc genhtml_legend=1 00:33:49.044 --rc geninfo_all_blocks=1 00:33:49.044 --rc geninfo_unexecuted_blocks=1 00:33:49.044 00:33:49.044 ' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.044 --rc genhtml_branch_coverage=1 00:33:49.044 --rc genhtml_function_coverage=1 00:33:49.044 --rc genhtml_legend=1 00:33:49.044 --rc geninfo_all_blocks=1 00:33:49.044 --rc geninfo_unexecuted_blocks=1 00:33:49.044 00:33:49.044 ' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:49.044 --rc genhtml_branch_coverage=1 00:33:49.044 --rc genhtml_function_coverage=1 00:33:49.044 --rc genhtml_legend=1 00:33:49.044 --rc geninfo_all_blocks=1 00:33:49.044 --rc 
geninfo_unexecuted_blocks=1 00:33:49.044 00:33:49.044 ' 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.044 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:49.045 21:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.045 21:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:49.045 21:26:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:49.045 21:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:57.188 
21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.188 21:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.188 21:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:57.188 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:57.188 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:57.188 Found net devices under 0000:31:00.0: cvl_0_0 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.188 21:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:57.188 Found net devices under 0000:31:00.1: cvl_0_1 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.188 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.189 
21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:33:57.189 00:33:57.189 --- 10.0.0.2 ping statistics --- 00:33:57.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.189 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:33:57.189 00:33:57.189 --- 10.0.0.1 ping statistics --- 00:33:57.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.189 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:57.189 21:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2347459 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2347459 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2347459 ']' 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.189 21:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:57.189 [2024-12-05 21:26:58.564480] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:57.189 [2024-12-05 21:26:58.566101] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:33:57.189 [2024-12-05 21:26:58.566177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.450 [2024-12-05 21:26:58.657280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.450 [2024-12-05 21:26:58.697483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.450 [2024-12-05 21:26:58.697520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.450 [2024-12-05 21:26:58.697529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.450 [2024-12-05 21:26:58.697535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.450 [2024-12-05 21:26:58.697541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.450 [2024-12-05 21:26:58.698116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.450 [2024-12-05 21:26:58.754756] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:57.450 [2024-12-05 21:26:58.755006] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.037 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:58.299 [2024-12-05 21:26:59.550598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:58.299 ************************************ 00:33:58.299 START TEST lvs_grow_clean 00:33:58.299 ************************************ 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:33:58.299 21:26:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:58.299 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:58.561 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:58.561 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:58.561 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:33:58.561 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:33:58.561 21:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:58.822 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:58.822 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:58.822 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e lvol 150 00:33:59.085 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b0a6d5b4-30c7-4be5-b864-995a4805d422 00:33:59.085 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:59.085 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:59.346 [2024-12-05 21:27:00.530480] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:59.346 [2024-12-05 21:27:00.530569] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:59.346 true 00:33:59.346 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:33:59.346 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:59.346 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:59.346 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:59.607 21:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b0a6d5b4-30c7-4be5-b864-995a4805d422 00:33:59.869 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:59.869 [2024-12-05 21:27:01.258859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.869 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2348048 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2348048 /var/tmp/bdevperf.sock 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2348048 ']' 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:00.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.130 21:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:00.130 [2024-12-05 21:27:01.525643] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:34:00.130 [2024-12-05 21:27:01.525724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348048 ] 00:34:00.392 [2024-12-05 21:27:01.623890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.392 [2024-12-05 21:27:01.674902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.963 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.963 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:00.963 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:01.224 Nvme0n1 00:34:01.224 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:01.485 [ 00:34:01.485 { 00:34:01.485 "name": "Nvme0n1", 00:34:01.485 "aliases": [ 00:34:01.485 "b0a6d5b4-30c7-4be5-b864-995a4805d422" 00:34:01.485 ], 00:34:01.485 "product_name": "NVMe disk", 00:34:01.485 
"block_size": 4096, 00:34:01.485 "num_blocks": 38912, 00:34:01.485 "uuid": "b0a6d5b4-30c7-4be5-b864-995a4805d422", 00:34:01.485 "numa_id": 0, 00:34:01.485 "assigned_rate_limits": { 00:34:01.485 "rw_ios_per_sec": 0, 00:34:01.485 "rw_mbytes_per_sec": 0, 00:34:01.485 "r_mbytes_per_sec": 0, 00:34:01.485 "w_mbytes_per_sec": 0 00:34:01.485 }, 00:34:01.485 "claimed": false, 00:34:01.485 "zoned": false, 00:34:01.485 "supported_io_types": { 00:34:01.485 "read": true, 00:34:01.485 "write": true, 00:34:01.485 "unmap": true, 00:34:01.485 "flush": true, 00:34:01.485 "reset": true, 00:34:01.485 "nvme_admin": true, 00:34:01.485 "nvme_io": true, 00:34:01.485 "nvme_io_md": false, 00:34:01.485 "write_zeroes": true, 00:34:01.485 "zcopy": false, 00:34:01.486 "get_zone_info": false, 00:34:01.486 "zone_management": false, 00:34:01.486 "zone_append": false, 00:34:01.486 "compare": true, 00:34:01.486 "compare_and_write": true, 00:34:01.486 "abort": true, 00:34:01.486 "seek_hole": false, 00:34:01.486 "seek_data": false, 00:34:01.486 "copy": true, 00:34:01.486 "nvme_iov_md": false 00:34:01.486 }, 00:34:01.486 "memory_domains": [ 00:34:01.486 { 00:34:01.486 "dma_device_id": "system", 00:34:01.486 "dma_device_type": 1 00:34:01.486 } 00:34:01.486 ], 00:34:01.486 "driver_specific": { 00:34:01.486 "nvme": [ 00:34:01.486 { 00:34:01.486 "trid": { 00:34:01.486 "trtype": "TCP", 00:34:01.486 "adrfam": "IPv4", 00:34:01.486 "traddr": "10.0.0.2", 00:34:01.486 "trsvcid": "4420", 00:34:01.486 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:01.486 }, 00:34:01.486 "ctrlr_data": { 00:34:01.486 "cntlid": 1, 00:34:01.486 "vendor_id": "0x8086", 00:34:01.486 "model_number": "SPDK bdev Controller", 00:34:01.486 "serial_number": "SPDK0", 00:34:01.486 "firmware_revision": "25.01", 00:34:01.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.486 "oacs": { 00:34:01.486 "security": 0, 00:34:01.486 "format": 0, 00:34:01.486 "firmware": 0, 00:34:01.486 "ns_manage": 0 00:34:01.486 }, 00:34:01.486 "multi_ctrlr": true, 
00:34:01.486 "ana_reporting": false 00:34:01.486 }, 00:34:01.486 "vs": { 00:34:01.486 "nvme_version": "1.3" 00:34:01.486 }, 00:34:01.486 "ns_data": { 00:34:01.486 "id": 1, 00:34:01.486 "can_share": true 00:34:01.486 } 00:34:01.486 } 00:34:01.486 ], 00:34:01.486 "mp_policy": "active_passive" 00:34:01.486 } 00:34:01.486 } 00:34:01.486 ] 00:34:01.486 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2348289 00:34:01.486 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:01.486 21:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:01.486 Running I/O for 10 seconds... 00:34:02.871 Latency(us) 00:34:02.872 [2024-12-05T20:27:04.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:02.872 Nvme0n1 : 1.00 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:34:02.872 [2024-12-05T20:27:04.309Z] =================================================================================================================== 00:34:02.872 [2024-12-05T20:27:04.309Z] Total : 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:34:02.872 00:34:03.443 21:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:03.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:03.705 Nvme0n1 : 2.00 17975.50 70.22 0.00 0.00 0.00 0.00 0.00 00:34:03.705 [2024-12-05T20:27:05.142Z] 
=================================================================================================================== 00:34:03.705 [2024-12-05T20:27:05.142Z] Total : 17975.50 70.22 0.00 0.00 0.00 0.00 0.00 00:34:03.705 00:34:03.705 true 00:34:03.705 21:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:03.705 21:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:03.966 21:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:03.966 21:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:03.966 21:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2348289 00:34:04.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:04.534 Nvme0n1 : 3.00 18016.33 70.38 0.00 0.00 0.00 0.00 0.00 00:34:04.534 [2024-12-05T20:27:05.971Z] =================================================================================================================== 00:34:04.534 [2024-12-05T20:27:05.971Z] Total : 18016.33 70.38 0.00 0.00 0.00 0.00 0.00 00:34:04.534 00:34:05.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:05.472 Nvme0n1 : 4.00 18068.25 70.58 0.00 0.00 0.00 0.00 0.00 00:34:05.472 [2024-12-05T20:27:06.909Z] =================================================================================================================== 00:34:05.472 [2024-12-05T20:27:06.909Z] Total : 18068.25 70.58 0.00 0.00 0.00 0.00 0.00 00:34:05.472 00:34:06.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:34:06.852 Nvme0n1 : 5.00 18086.80 70.65 0.00 0.00 0.00 0.00 0.00 00:34:06.852 [2024-12-05T20:27:08.289Z] =================================================================================================================== 00:34:06.852 [2024-12-05T20:27:08.289Z] Total : 18086.80 70.65 0.00 0.00 0.00 0.00 0.00 00:34:06.852 00:34:07.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:07.791 Nvme0n1 : 6.00 18109.83 70.74 0.00 0.00 0.00 0.00 0.00 00:34:07.791 [2024-12-05T20:27:09.228Z] =================================================================================================================== 00:34:07.791 [2024-12-05T20:27:09.228Z] Total : 18109.83 70.74 0.00 0.00 0.00 0.00 0.00 00:34:07.791 00:34:08.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:08.729 Nvme0n1 : 7.00 18126.14 70.81 0.00 0.00 0.00 0.00 0.00 00:34:08.729 [2024-12-05T20:27:10.166Z] =================================================================================================================== 00:34:08.729 [2024-12-05T20:27:10.166Z] Total : 18126.14 70.81 0.00 0.00 0.00 0.00 0.00 00:34:08.729 00:34:09.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:09.671 Nvme0n1 : 8.00 18130.50 70.82 0.00 0.00 0.00 0.00 0.00 00:34:09.671 [2024-12-05T20:27:11.108Z] =================================================================================================================== 00:34:09.671 [2024-12-05T20:27:11.108Z] Total : 18130.50 70.82 0.00 0.00 0.00 0.00 0.00 00:34:09.671 00:34:10.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:10.611 Nvme0n1 : 9.00 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:34:10.611 [2024-12-05T20:27:12.048Z] =================================================================================================================== 00:34:10.611 [2024-12-05T20:27:12.048Z] Total : 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:34:10.611 
00:34:11.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:11.555 Nvme0n1 : 10.00 18162.00 70.95 0.00 0.00 0.00 0.00 0.00 00:34:11.555 [2024-12-05T20:27:12.992Z] =================================================================================================================== 00:34:11.555 [2024-12-05T20:27:12.992Z] Total : 18162.00 70.95 0.00 0.00 0.00 0.00 0.00 00:34:11.555 00:34:11.555 00:34:11.555 Latency(us) 00:34:11.555 [2024-12-05T20:27:12.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:11.555 Nvme0n1 : 10.01 18162.54 70.95 0.00 0.00 7044.71 2484.91 13707.95 00:34:11.555 [2024-12-05T20:27:12.992Z] =================================================================================================================== 00:34:11.555 [2024-12-05T20:27:12.992Z] Total : 18162.54 70.95 0.00 0.00 7044.71 2484.91 13707.95 00:34:11.555 { 00:34:11.555 "results": [ 00:34:11.555 { 00:34:11.555 "job": "Nvme0n1", 00:34:11.555 "core_mask": "0x2", 00:34:11.555 "workload": "randwrite", 00:34:11.555 "status": "finished", 00:34:11.555 "queue_depth": 128, 00:34:11.555 "io_size": 4096, 00:34:11.555 "runtime": 10.006752, 00:34:11.555 "iops": 18162.536655250376, 00:34:11.555 "mibps": 70.94740880957178, 00:34:11.555 "io_failed": 0, 00:34:11.555 "io_timeout": 0, 00:34:11.555 "avg_latency_us": 7044.711635011115, 00:34:11.555 "min_latency_us": 2484.9066666666668, 00:34:11.555 "max_latency_us": 13707.946666666667 00:34:11.555 } 00:34:11.555 ], 00:34:11.555 "core_count": 1 00:34:11.555 } 00:34:11.556 21:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2348048 00:34:11.556 21:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2348048 ']' 00:34:11.556 21:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2348048 00:34:11.556 21:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:11.556 21:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:11.556 21:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348048 00:34:11.816 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:11.816 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:11.816 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348048' 00:34:11.816 killing process with pid 2348048 00:34:11.816 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2348048 00:34:11.816 Received shutdown signal, test time was about 10.000000 seconds 00:34:11.816 00:34:11.816 Latency(us) 00:34:11.816 [2024-12-05T20:27:13.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.816 [2024-12-05T20:27:13.253Z] =================================================================================================================== 00:34:11.816 [2024-12-05T20:27:13.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:11.816 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2348048 00:34:11.816 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:12.076 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.076 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:12.076 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:12.336 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:12.336 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:12.336 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:12.596 [2024-12-05 21:27:13.774513] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:12.596 21:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:12.596 request: 00:34:12.596 { 00:34:12.596 "uuid": "922f846a-5daf-4ff9-ad31-9cad5da0d40e", 00:34:12.596 "method": 
"bdev_lvol_get_lvstores", 00:34:12.596 "req_id": 1 00:34:12.596 } 00:34:12.596 Got JSON-RPC error response 00:34:12.596 response: 00:34:12.596 { 00:34:12.596 "code": -19, 00:34:12.596 "message": "No such device" 00:34:12.596 } 00:34:12.596 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:12.596 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:12.596 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:12.596 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:12.596 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:12.855 aio_bdev 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b0a6d5b4-30c7-4be5-b864-995a4805d422 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b0a6d5b4-30c7-4be5-b864-995a4805d422 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:12.855 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:13.116 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b0a6d5b4-30c7-4be5-b864-995a4805d422 -t 2000 00:34:13.116 [ 00:34:13.116 { 00:34:13.116 "name": "b0a6d5b4-30c7-4be5-b864-995a4805d422", 00:34:13.116 "aliases": [ 00:34:13.116 "lvs/lvol" 00:34:13.116 ], 00:34:13.116 "product_name": "Logical Volume", 00:34:13.116 "block_size": 4096, 00:34:13.116 "num_blocks": 38912, 00:34:13.116 "uuid": "b0a6d5b4-30c7-4be5-b864-995a4805d422", 00:34:13.116 "assigned_rate_limits": { 00:34:13.116 "rw_ios_per_sec": 0, 00:34:13.116 "rw_mbytes_per_sec": 0, 00:34:13.116 "r_mbytes_per_sec": 0, 00:34:13.116 "w_mbytes_per_sec": 0 00:34:13.116 }, 00:34:13.116 "claimed": false, 00:34:13.116 "zoned": false, 00:34:13.116 "supported_io_types": { 00:34:13.116 "read": true, 00:34:13.116 "write": true, 00:34:13.116 "unmap": true, 00:34:13.116 "flush": false, 00:34:13.116 "reset": true, 00:34:13.116 "nvme_admin": false, 00:34:13.116 "nvme_io": false, 00:34:13.116 "nvme_io_md": false, 00:34:13.116 "write_zeroes": true, 00:34:13.116 "zcopy": false, 00:34:13.116 "get_zone_info": false, 00:34:13.116 "zone_management": false, 00:34:13.116 "zone_append": false, 00:34:13.116 "compare": false, 00:34:13.116 "compare_and_write": false, 00:34:13.116 "abort": false, 00:34:13.116 "seek_hole": true, 00:34:13.116 "seek_data": true, 00:34:13.116 "copy": false, 00:34:13.116 "nvme_iov_md": false 00:34:13.116 }, 00:34:13.116 "driver_specific": { 00:34:13.116 "lvol": { 00:34:13.116 "lvol_store_uuid": "922f846a-5daf-4ff9-ad31-9cad5da0d40e", 00:34:13.116 "base_bdev": "aio_bdev", 00:34:13.116 
"thin_provision": false, 00:34:13.116 "num_allocated_clusters": 38, 00:34:13.116 "snapshot": false, 00:34:13.116 "clone": false, 00:34:13.116 "esnap_clone": false 00:34:13.116 } 00:34:13.116 } 00:34:13.116 } 00:34:13.116 ] 00:34:13.116 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:13.116 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:13.116 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:13.376 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:13.376 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 00:34:13.376 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:13.638 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:13.638 21:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b0a6d5b4-30c7-4be5-b864-995a4805d422 00:34:13.638 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 922f846a-5daf-4ff9-ad31-9cad5da0d40e 
00:34:13.898 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:13.898 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.182 00:34:14.182 real 0m15.738s 00:34:14.182 user 0m15.399s 00:34:14.182 sys 0m1.469s 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:14.182 ************************************ 00:34:14.182 END TEST lvs_grow_clean 00:34:14.182 ************************************ 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:14.182 ************************************ 00:34:14.182 START TEST lvs_grow_dirty 00:34:14.182 ************************************ 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:14.182 21:27:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.182 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:14.443 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:14.443 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:14.443 21:27:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:14.443 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:14.443 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:14.704 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:14.704 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:14.704 21:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a lvol 150 00:34:14.964 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:14.964 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:14.964 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:14.964 [2024-12-05 21:27:16.302487] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:14.964 [2024-12-05 
21:27:16.302557] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:14.964 true 00:34:14.964 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:14.964 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:15.225 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:15.225 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:15.486 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:15.486 21:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:15.747 [2024-12-05 21:27:16.990777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2351472 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2351472 /var/tmp/bdevperf.sock 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2351472 ']' 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:15.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.747 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:16.010 [2024-12-05 21:27:17.212108] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:34:16.010 [2024-12-05 21:27:17.212163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351472 ] 00:34:16.010 [2024-12-05 21:27:17.302750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.010 [2024-12-05 21:27:17.332464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.582 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.582 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:16.582 21:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:17.152 Nvme0n1 00:34:17.152 21:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:17.152 [ 00:34:17.152 { 00:34:17.152 "name": "Nvme0n1", 00:34:17.152 "aliases": [ 00:34:17.152 "beafc4cf-d9b7-48c8-8422-5f064f2ccb59" 00:34:17.152 ], 00:34:17.152 "product_name": "NVMe disk", 00:34:17.152 "block_size": 4096, 00:34:17.152 "num_blocks": 38912, 00:34:17.152 "uuid": "beafc4cf-d9b7-48c8-8422-5f064f2ccb59", 00:34:17.152 "numa_id": 0, 00:34:17.152 "assigned_rate_limits": { 00:34:17.152 "rw_ios_per_sec": 0, 00:34:17.152 "rw_mbytes_per_sec": 0, 00:34:17.152 "r_mbytes_per_sec": 0, 00:34:17.152 "w_mbytes_per_sec": 0 00:34:17.152 }, 00:34:17.152 "claimed": false, 00:34:17.153 "zoned": false, 
00:34:17.153 "supported_io_types": { 00:34:17.153 "read": true, 00:34:17.153 "write": true, 00:34:17.153 "unmap": true, 00:34:17.153 "flush": true, 00:34:17.153 "reset": true, 00:34:17.153 "nvme_admin": true, 00:34:17.153 "nvme_io": true, 00:34:17.153 "nvme_io_md": false, 00:34:17.153 "write_zeroes": true, 00:34:17.153 "zcopy": false, 00:34:17.153 "get_zone_info": false, 00:34:17.153 "zone_management": false, 00:34:17.153 "zone_append": false, 00:34:17.153 "compare": true, 00:34:17.153 "compare_and_write": true, 00:34:17.153 "abort": true, 00:34:17.153 "seek_hole": false, 00:34:17.153 "seek_data": false, 00:34:17.153 "copy": true, 00:34:17.153 "nvme_iov_md": false 00:34:17.153 }, 00:34:17.153 "memory_domains": [ 00:34:17.153 { 00:34:17.153 "dma_device_id": "system", 00:34:17.153 "dma_device_type": 1 00:34:17.153 } 00:34:17.153 ], 00:34:17.153 "driver_specific": { 00:34:17.153 "nvme": [ 00:34:17.153 { 00:34:17.153 "trid": { 00:34:17.153 "trtype": "TCP", 00:34:17.153 "adrfam": "IPv4", 00:34:17.153 "traddr": "10.0.0.2", 00:34:17.153 "trsvcid": "4420", 00:34:17.153 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:17.153 }, 00:34:17.153 "ctrlr_data": { 00:34:17.153 "cntlid": 1, 00:34:17.153 "vendor_id": "0x8086", 00:34:17.153 "model_number": "SPDK bdev Controller", 00:34:17.153 "serial_number": "SPDK0", 00:34:17.153 "firmware_revision": "25.01", 00:34:17.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:17.153 "oacs": { 00:34:17.153 "security": 0, 00:34:17.153 "format": 0, 00:34:17.153 "firmware": 0, 00:34:17.153 "ns_manage": 0 00:34:17.153 }, 00:34:17.153 "multi_ctrlr": true, 00:34:17.153 "ana_reporting": false 00:34:17.153 }, 00:34:17.153 "vs": { 00:34:17.153 "nvme_version": "1.3" 00:34:17.153 }, 00:34:17.153 "ns_data": { 00:34:17.153 "id": 1, 00:34:17.153 "can_share": true 00:34:17.153 } 00:34:17.153 } 00:34:17.153 ], 00:34:17.153 "mp_policy": "active_passive" 00:34:17.153 } 00:34:17.153 } 00:34:17.153 ] 00:34:17.153 21:27:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2351812 00:34:17.153 21:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:17.153 21:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:17.413 Running I/O for 10 seconds... 00:34:18.355 Latency(us) 00:34:18.355 [2024-12-05T20:27:19.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:18.355 Nvme0n1 : 1.00 17916.00 69.98 0.00 0.00 0.00 0.00 0.00 00:34:18.355 [2024-12-05T20:27:19.792Z] =================================================================================================================== 00:34:18.355 [2024-12-05T20:27:19.792Z] Total : 17916.00 69.98 0.00 0.00 0.00 0.00 0.00 00:34:18.355 00:34:19.295 21:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:19.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:19.295 Nvme0n1 : 2.00 17975.00 70.21 0.00 0.00 0.00 0.00 0.00 00:34:19.295 [2024-12-05T20:27:20.732Z] =================================================================================================================== 00:34:19.295 [2024-12-05T20:27:20.732Z] Total : 17975.00 70.21 0.00 0.00 0.00 0.00 0.00 00:34:19.295 00:34:19.295 true 00:34:19.555 21:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:19.556 21:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:19.556 21:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:19.556 21:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:19.556 21:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2351812 00:34:20.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:20.500 Nvme0n1 : 3.00 17994.67 70.29 0.00 0.00 0.00 0.00 0.00 00:34:20.500 [2024-12-05T20:27:21.937Z] =================================================================================================================== 00:34:20.500 [2024-12-05T20:27:21.937Z] Total : 17994.67 70.29 0.00 0.00 0.00 0.00 0.00 00:34:20.500 00:34:21.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:21.443 Nvme0n1 : 4.00 18036.25 70.45 0.00 0.00 0.00 0.00 0.00 00:34:21.443 [2024-12-05T20:27:22.880Z] =================================================================================================================== 00:34:21.443 [2024-12-05T20:27:22.880Z] Total : 18036.25 70.45 0.00 0.00 0.00 0.00 0.00 00:34:21.443 00:34:22.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:22.387 Nvme0n1 : 5.00 18074.00 70.60 0.00 0.00 0.00 0.00 0.00 00:34:22.387 [2024-12-05T20:27:23.824Z] =================================================================================================================== 00:34:22.387 [2024-12-05T20:27:23.824Z] Total : 18074.00 70.60 0.00 0.00 0.00 0.00 0.00 00:34:22.387 00:34:23.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:34:23.328 Nvme0n1 : 6.00 18099.00 70.70 0.00 0.00 0.00 0.00 0.00 00:34:23.328 [2024-12-05T20:27:24.765Z] =================================================================================================================== 00:34:23.328 [2024-12-05T20:27:24.765Z] Total : 18099.00 70.70 0.00 0.00 0.00 0.00 0.00 00:34:23.328 00:34:24.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:24.269 Nvme0n1 : 7.00 18110.29 70.74 0.00 0.00 0.00 0.00 0.00 00:34:24.269 [2024-12-05T20:27:25.706Z] =================================================================================================================== 00:34:24.269 [2024-12-05T20:27:25.706Z] Total : 18110.29 70.74 0.00 0.00 0.00 0.00 0.00 00:34:24.269 00:34:25.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:25.655 Nvme0n1 : 8.00 18132.50 70.83 0.00 0.00 0.00 0.00 0.00 00:34:25.655 [2024-12-05T20:27:27.092Z] =================================================================================================================== 00:34:25.655 [2024-12-05T20:27:27.092Z] Total : 18132.50 70.83 0.00 0.00 0.00 0.00 0.00 00:34:25.655 00:34:26.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:26.226 Nvme0n1 : 9.00 18135.67 70.84 0.00 0.00 0.00 0.00 0.00 00:34:26.226 [2024-12-05T20:27:27.663Z] =================================================================================================================== 00:34:26.226 [2024-12-05T20:27:27.663Z] Total : 18135.67 70.84 0.00 0.00 0.00 0.00 0.00 00:34:26.226 00:34:27.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.608 Nvme0n1 : 10.00 18150.90 70.90 0.00 0.00 0.00 0.00 0.00 00:34:27.608 [2024-12-05T20:27:29.045Z] =================================================================================================================== 00:34:27.608 [2024-12-05T20:27:29.045Z] Total : 18150.90 70.90 0.00 0.00 0.00 0.00 0.00 00:34:27.608 00:34:27.608 
00:34:27.608 Latency(us) 00:34:27.608 [2024-12-05T20:27:29.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:27.608 Nvme0n1 : 10.01 18152.62 70.91 0.00 0.00 7049.60 1747.63 13380.27 00:34:27.608 [2024-12-05T20:27:29.045Z] =================================================================================================================== 00:34:27.608 [2024-12-05T20:27:29.045Z] Total : 18152.62 70.91 0.00 0.00 7049.60 1747.63 13380.27 00:34:27.608 { 00:34:27.608 "results": [ 00:34:27.608 { 00:34:27.608 "job": "Nvme0n1", 00:34:27.608 "core_mask": "0x2", 00:34:27.608 "workload": "randwrite", 00:34:27.608 "status": "finished", 00:34:27.608 "queue_depth": 128, 00:34:27.608 "io_size": 4096, 00:34:27.608 "runtime": 10.006104, 00:34:27.608 "iops": 18152.619640971152, 00:34:27.608 "mibps": 70.90867047254356, 00:34:27.608 "io_failed": 0, 00:34:27.608 "io_timeout": 0, 00:34:27.608 "avg_latency_us": 7049.595704839872, 00:34:27.608 "min_latency_us": 1747.6266666666668, 00:34:27.608 "max_latency_us": 13380.266666666666 00:34:27.608 } 00:34:27.608 ], 00:34:27.608 "core_count": 1 00:34:27.608 } 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2351472 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2351472 ']' 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2351472 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.608 21:27:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2351472 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2351472' 00:34:27.608 killing process with pid 2351472 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2351472 00:34:27.608 Received shutdown signal, test time was about 10.000000 seconds 00:34:27.608 00:34:27.608 Latency(us) 00:34:27.608 [2024-12-05T20:27:29.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.608 [2024-12-05T20:27:29.045Z] =================================================================================================================== 00:34:27.608 [2024-12-05T20:27:29.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2351472 00:34:27.608 21:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:27.608 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:27.896 21:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:27.896 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2347459 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2347459 00:34:28.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2347459 Killed "${NVMF_APP[@]}" "$@" 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2353828 00:34:28.190 21:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2353828 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2353828 ']' 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.190 21:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:28.190 [2024-12-05 21:27:29.482524] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.190 [2024-12-05 21:27:29.483934] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:34:28.190 [2024-12-05 21:27:29.483995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.190 [2024-12-05 21:27:29.571009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.190 [2024-12-05 21:27:29.607356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.190 [2024-12-05 21:27:29.607392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.190 [2024-12-05 21:27:29.607400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.190 [2024-12-05 21:27:29.607407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.190 [2024-12-05 21:27:29.607412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.190 [2024-12-05 21:27:29.607984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.492 [2024-12-05 21:27:29.663691] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.492 [2024-12-05 21:27:29.663954] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:29.062 [2024-12-05 21:27:30.462491] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:29.062 [2024-12-05 21:27:30.462596] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:29.062 [2024-12-05 21:27:30.462628] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:29.062 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:29.321 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b beafc4cf-d9b7-48c8-8422-5f064f2ccb59 -t 2000 00:34:29.581 [ 00:34:29.581 { 00:34:29.581 "name": "beafc4cf-d9b7-48c8-8422-5f064f2ccb59", 00:34:29.581 "aliases": [ 00:34:29.581 "lvs/lvol" 00:34:29.581 ], 00:34:29.581 "product_name": "Logical Volume", 00:34:29.581 "block_size": 4096, 00:34:29.581 "num_blocks": 38912, 00:34:29.581 "uuid": "beafc4cf-d9b7-48c8-8422-5f064f2ccb59", 00:34:29.581 "assigned_rate_limits": { 00:34:29.581 "rw_ios_per_sec": 0, 00:34:29.581 "rw_mbytes_per_sec": 0, 00:34:29.581 "r_mbytes_per_sec": 0, 00:34:29.581 "w_mbytes_per_sec": 0 00:34:29.581 }, 00:34:29.581 "claimed": false, 00:34:29.581 "zoned": false, 00:34:29.581 "supported_io_types": { 00:34:29.581 "read": true, 00:34:29.581 "write": true, 00:34:29.581 "unmap": true, 00:34:29.581 "flush": false, 00:34:29.581 "reset": true, 00:34:29.581 "nvme_admin": false, 00:34:29.581 "nvme_io": false, 00:34:29.581 "nvme_io_md": false, 00:34:29.581 "write_zeroes": true, 
00:34:29.581 "zcopy": false, 00:34:29.581 "get_zone_info": false, 00:34:29.581 "zone_management": false, 00:34:29.581 "zone_append": false, 00:34:29.581 "compare": false, 00:34:29.581 "compare_and_write": false, 00:34:29.581 "abort": false, 00:34:29.581 "seek_hole": true, 00:34:29.581 "seek_data": true, 00:34:29.581 "copy": false, 00:34:29.581 "nvme_iov_md": false 00:34:29.581 }, 00:34:29.581 "driver_specific": { 00:34:29.581 "lvol": { 00:34:29.581 "lvol_store_uuid": "cca88d66-9c3c-4188-a8a9-abcf574b6f4a", 00:34:29.581 "base_bdev": "aio_bdev", 00:34:29.581 "thin_provision": false, 00:34:29.581 "num_allocated_clusters": 38, 00:34:29.581 "snapshot": false, 00:34:29.581 "clone": false, 00:34:29.581 "esnap_clone": false 00:34:29.581 } 00:34:29.581 } 00:34:29.581 } 00:34:29.581 ] 00:34:29.581 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:29.581 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:29.581 21:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:29.581 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:29.581 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:29.581 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:29.842 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:29.842 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:30.103 [2024-12-05 21:27:31.332510] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:30.103 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:30.103 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:30.103 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:30.103 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:30.104 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:30.364 request: 00:34:30.364 { 00:34:30.364 "uuid": "cca88d66-9c3c-4188-a8a9-abcf574b6f4a", 00:34:30.364 "method": "bdev_lvol_get_lvstores", 00:34:30.364 "req_id": 1 00:34:30.364 } 00:34:30.364 Got JSON-RPC error response 00:34:30.364 response: 00:34:30.364 { 00:34:30.364 "code": -19, 00:34:30.364 "message": "No such device" 00:34:30.364 } 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:30.364 aio_bdev 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:30.364 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:30.627 21:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b beafc4cf-d9b7-48c8-8422-5f064f2ccb59 -t 2000 00:34:30.627 [ 00:34:30.627 { 00:34:30.627 "name": "beafc4cf-d9b7-48c8-8422-5f064f2ccb59", 00:34:30.627 "aliases": [ 00:34:30.627 "lvs/lvol" 00:34:30.627 ], 00:34:30.627 "product_name": "Logical Volume", 00:34:30.627 "block_size": 4096, 00:34:30.627 "num_blocks": 38912, 00:34:30.627 "uuid": "beafc4cf-d9b7-48c8-8422-5f064f2ccb59", 00:34:30.627 "assigned_rate_limits": { 00:34:30.627 "rw_ios_per_sec": 0, 00:34:30.627 "rw_mbytes_per_sec": 0, 00:34:30.627 
"r_mbytes_per_sec": 0, 00:34:30.627 "w_mbytes_per_sec": 0 00:34:30.627 }, 00:34:30.627 "claimed": false, 00:34:30.627 "zoned": false, 00:34:30.627 "supported_io_types": { 00:34:30.627 "read": true, 00:34:30.627 "write": true, 00:34:30.627 "unmap": true, 00:34:30.627 "flush": false, 00:34:30.627 "reset": true, 00:34:30.627 "nvme_admin": false, 00:34:30.627 "nvme_io": false, 00:34:30.627 "nvme_io_md": false, 00:34:30.627 "write_zeroes": true, 00:34:30.627 "zcopy": false, 00:34:30.627 "get_zone_info": false, 00:34:30.627 "zone_management": false, 00:34:30.627 "zone_append": false, 00:34:30.627 "compare": false, 00:34:30.627 "compare_and_write": false, 00:34:30.627 "abort": false, 00:34:30.627 "seek_hole": true, 00:34:30.627 "seek_data": true, 00:34:30.627 "copy": false, 00:34:30.627 "nvme_iov_md": false 00:34:30.627 }, 00:34:30.627 "driver_specific": { 00:34:30.627 "lvol": { 00:34:30.627 "lvol_store_uuid": "cca88d66-9c3c-4188-a8a9-abcf574b6f4a", 00:34:30.627 "base_bdev": "aio_bdev", 00:34:30.627 "thin_provision": false, 00:34:30.627 "num_allocated_clusters": 38, 00:34:30.627 "snapshot": false, 00:34:30.627 "clone": false, 00:34:30.627 "esnap_clone": false 00:34:30.627 } 00:34:30.627 } 00:34:30.627 } 00:34:30.627 ] 00:34:30.627 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:30.627 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:30.627 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:30.890 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:30.890 21:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:30.890 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:31.153 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:31.153 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete beafc4cf-d9b7-48c8-8422-5f064f2ccb59 00:34:31.153 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cca88d66-9c3c-4188-a8a9-abcf574b6f4a 00:34:31.414 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:31.674 00:34:31.674 real 0m17.498s 00:34:31.674 user 0m35.416s 00:34:31.674 sys 0m2.937s 00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:31.674 ************************************ 00:34:31.674 END TEST lvs_grow_dirty 00:34:31.674 ************************************ 
00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:31.674 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:31.675 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:31.675 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:31.675 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:31.675 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:31.675 21:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:31.675 nvmf_trace.0 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:31.675 21:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:31.675 rmmod nvme_tcp 00:34:31.675 rmmod nvme_fabrics 00:34:31.675 rmmod nvme_keyring 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2353828 ']' 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2353828 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2353828 ']' 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2353828 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:31.675 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353828 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:31.935 
21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353828' 00:34:31.935 killing process with pid 2353828 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2353828 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2353828 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.935 21:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.496 
21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.496 00:34:34.496 real 0m45.328s 00:34:34.496 user 0m54.004s 00:34:34.496 sys 0m11.027s 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:34.496 ************************************ 00:34:34.496 END TEST nvmf_lvs_grow 00:34:34.496 ************************************ 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:34.496 ************************************ 00:34:34.496 START TEST nvmf_bdev_io_wait 00:34:34.496 ************************************ 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:34.496 * Looking for test storage... 
00:34:34.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.496 --rc genhtml_branch_coverage=1 00:34:34.496 --rc genhtml_function_coverage=1 00:34:34.496 --rc genhtml_legend=1 00:34:34.496 --rc geninfo_all_blocks=1 00:34:34.496 --rc geninfo_unexecuted_blocks=1 00:34:34.496 00:34:34.496 ' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.496 --rc genhtml_branch_coverage=1 00:34:34.496 --rc genhtml_function_coverage=1 00:34:34.496 --rc genhtml_legend=1 00:34:34.496 --rc geninfo_all_blocks=1 00:34:34.496 --rc geninfo_unexecuted_blocks=1 00:34:34.496 00:34:34.496 ' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.496 --rc genhtml_branch_coverage=1 00:34:34.496 --rc genhtml_function_coverage=1 00:34:34.496 --rc genhtml_legend=1 00:34:34.496 --rc geninfo_all_blocks=1 00:34:34.496 --rc geninfo_unexecuted_blocks=1 00:34:34.496 00:34:34.496 ' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:34.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.496 --rc genhtml_branch_coverage=1 00:34:34.496 --rc genhtml_function_coverage=1 
00:34:34.496 --rc genhtml_legend=1 00:34:34.496 --rc geninfo_all_blocks=1 00:34:34.496 --rc geninfo_unexecuted_blocks=1 00:34:34.496 00:34:34.496 ' 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.496 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.497 21:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.497 21:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.497 21:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:34.497 21:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.497 21:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:42.641 21:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:42.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:42.641 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:42.641 Found net devices under 0000:31:00.0: cvl_0_0 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:42.641 Found net devices under 0000:31:00.1: cvl_0_1 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:42.641 21:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.641 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:42.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:42.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:34:42.642 00:34:42.642 --- 10.0.0.2 ping statistics --- 00:34:42.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.642 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:42.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:34:42.642 00:34:42.642 --- 10.0.0.1 ping statistics --- 00:34:42.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.642 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:42.642 21:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2359243 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2359243 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2359243 ']' 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.642 21:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:42.642 [2024-12-05 21:27:43.874245] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:42.642 [2024-12-05 21:27:43.875231] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:34:42.642 [2024-12-05 21:27:43.875269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.642 [2024-12-05 21:27:43.959373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:42.642 [2024-12-05 21:27:43.996224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:42.642 [2024-12-05 21:27:43.996257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:42.642 [2024-12-05 21:27:43.996265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:42.642 [2024-12-05 21:27:43.996272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:42.642 [2024-12-05 21:27:43.996278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:42.642 [2024-12-05 21:27:43.997961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.642 [2024-12-05 21:27:43.998212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:42.642 [2024-12-05 21:27:43.998367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.642 [2024-12-05 21:27:43.998367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:42.642 [2024-12-05 21:27:43.998723] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 [2024-12-05 21:27:44.748826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:43.585 [2024-12-05 21:27:44.749083] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:43.585 [2024-12-05 21:27:44.749884] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:43.585 [2024-12-05 21:27:44.749932] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 [2024-12-05 21:27:44.759250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 Malloc0 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 [2024-12-05 21:27:44.823098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2359594 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2359596 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:43.585 21:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:43.585 { 00:34:43.585 "params": { 00:34:43.585 "name": "Nvme$subsystem", 00:34:43.585 "trtype": "$TEST_TRANSPORT", 00:34:43.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.585 "adrfam": "ipv4", 00:34:43.585 "trsvcid": "$NVMF_PORT", 00:34:43.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.585 "hdgst": ${hdgst:-false}, 00:34:43.585 "ddgst": ${ddgst:-false} 00:34:43.585 }, 00:34:43.585 "method": "bdev_nvme_attach_controller" 00:34:43.585 } 00:34:43.585 EOF 00:34:43.585 )") 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2359598 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:43.585 21:27:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:43.585 { 00:34:43.585 "params": { 00:34:43.585 "name": "Nvme$subsystem", 00:34:43.585 "trtype": "$TEST_TRANSPORT", 00:34:43.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.585 "adrfam": "ipv4", 00:34:43.585 "trsvcid": "$NVMF_PORT", 00:34:43.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.585 "hdgst": ${hdgst:-false}, 00:34:43.585 "ddgst": ${ddgst:-false} 00:34:43.585 }, 00:34:43.585 "method": "bdev_nvme_attach_controller" 00:34:43.585 } 00:34:43.585 EOF 00:34:43.585 )") 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2359601 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:43.585 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:43.586 { 00:34:43.586 "params": { 00:34:43.586 "name": 
"Nvme$subsystem", 00:34:43.586 "trtype": "$TEST_TRANSPORT", 00:34:43.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.586 "adrfam": "ipv4", 00:34:43.586 "trsvcid": "$NVMF_PORT", 00:34:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.586 "hdgst": ${hdgst:-false}, 00:34:43.586 "ddgst": ${ddgst:-false} 00:34:43.586 }, 00:34:43.586 "method": "bdev_nvme_attach_controller" 00:34:43.586 } 00:34:43.586 EOF 00:34:43.586 )") 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:43.586 { 00:34:43.586 "params": { 00:34:43.586 "name": "Nvme$subsystem", 00:34:43.586 "trtype": "$TEST_TRANSPORT", 00:34:43.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.586 "adrfam": "ipv4", 00:34:43.586 "trsvcid": "$NVMF_PORT", 00:34:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.586 "hdgst": ${hdgst:-false}, 00:34:43.586 "ddgst": ${ddgst:-false} 00:34:43.586 }, 00:34:43.586 "method": 
"bdev_nvme_attach_controller" 00:34:43.586 } 00:34:43.586 EOF 00:34:43.586 )") 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2359594 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:43.586 "params": { 00:34:43.586 "name": "Nvme1", 00:34:43.586 "trtype": "tcp", 00:34:43.586 "traddr": "10.0.0.2", 00:34:43.586 "adrfam": "ipv4", 00:34:43.586 "trsvcid": "4420", 00:34:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.586 "hdgst": false, 00:34:43.586 "ddgst": false 00:34:43.586 }, 00:34:43.586 "method": "bdev_nvme_attach_controller" 00:34:43.586 }' 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:43.586 "params": { 00:34:43.586 "name": "Nvme1", 00:34:43.586 "trtype": "tcp", 00:34:43.586 "traddr": "10.0.0.2", 00:34:43.586 "adrfam": "ipv4", 00:34:43.586 "trsvcid": "4420", 00:34:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.586 "hdgst": false, 00:34:43.586 "ddgst": false 00:34:43.586 }, 00:34:43.586 "method": "bdev_nvme_attach_controller" 00:34:43.586 }' 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:43.586 "params": { 00:34:43.586 "name": "Nvme1", 00:34:43.586 "trtype": "tcp", 00:34:43.586 "traddr": "10.0.0.2", 00:34:43.586 "adrfam": "ipv4", 00:34:43.586 "trsvcid": "4420", 00:34:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.586 "hdgst": false, 00:34:43.586 "ddgst": false 00:34:43.586 }, 00:34:43.586 "method": "bdev_nvme_attach_controller" 00:34:43.586 }' 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:43.586 21:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:43.586 "params": { 00:34:43.586 "name": "Nvme1", 00:34:43.586 "trtype": "tcp", 00:34:43.586 "traddr": "10.0.0.2", 00:34:43.586 "adrfam": "ipv4", 00:34:43.586 "trsvcid": "4420", 00:34:43.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.586 "hdgst": false, 00:34:43.586 "ddgst": false 00:34:43.586 }, 00:34:43.586 "method": "bdev_nvme_attach_controller" 
00:34:43.586 }' 00:34:43.586 [2024-12-05 21:27:44.875457] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:34:43.586 [2024-12-05 21:27:44.875511] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:43.586 [2024-12-05 21:27:44.880363] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:34:43.586 [2024-12-05 21:27:44.880409] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:43.586 [2024-12-05 21:27:44.882699] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:34:43.586 [2024-12-05 21:27:44.882748] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:43.586 [2024-12-05 21:27:44.883843] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:34:43.586 [2024-12-05 21:27:44.883894] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:43.852 [2024-12-05 21:27:45.041751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.852 [2024-12-05 21:27:45.070838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:43.852 [2024-12-05 21:27:45.096057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.852 [2024-12-05 21:27:45.125248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:43.852 [2024-12-05 21:27:45.140933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.852 [2024-12-05 21:27:45.169443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:43.852 [2024-12-05 21:27:45.190281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.852 [2024-12-05 21:27:45.218434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:44.113 Running I/O for 1 seconds... 00:34:44.113 Running I/O for 1 seconds... 00:34:44.113 Running I/O for 1 seconds... 00:34:44.113 Running I/O for 1 seconds... 
00:34:45.053 176920.00 IOPS, 691.09 MiB/s 00:34:45.053 Latency(us) 00:34:45.053 [2024-12-05T20:27:46.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.053 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:45.053 Nvme1n1 : 1.00 176559.34 689.68 0.00 0.00 720.78 305.49 2034.35 00:34:45.053 [2024-12-05T20:27:46.490Z] =================================================================================================================== 00:34:45.053 [2024-12-05T20:27:46.490Z] Total : 176559.34 689.68 0.00 0.00 720.78 305.49 2034.35 00:34:45.053 8859.00 IOPS, 34.61 MiB/s 00:34:45.053 Latency(us) 00:34:45.053 [2024-12-05T20:27:46.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.053 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:45.053 Nvme1n1 : 1.02 8836.04 34.52 0.00 0.00 14380.86 2184.53 24139.09 00:34:45.053 [2024-12-05T20:27:46.490Z] =================================================================================================================== 00:34:45.053 [2024-12-05T20:27:46.490Z] Total : 8836.04 34.52 0.00 0.00 14380.86 2184.53 24139.09 00:34:45.053 18841.00 IOPS, 73.60 MiB/s 00:34:45.053 Latency(us) 00:34:45.053 [2024-12-05T20:27:46.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.053 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:45.053 Nvme1n1 : 1.01 18873.74 73.73 0.00 0.00 6763.62 3181.23 10868.05 00:34:45.053 [2024-12-05T20:27:46.490Z] =================================================================================================================== 00:34:45.053 [2024-12-05T20:27:46.490Z] Total : 18873.74 73.73 0.00 0.00 6763.62 3181.23 10868.05 00:34:45.053 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2359596 00:34:45.314 9063.00 IOPS, 35.40 MiB/s [2024-12-05T20:27:46.751Z] 21:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2359598 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2359601 00:34:45.314 00:34:45.314 Latency(us) 00:34:45.314 [2024-12-05T20:27:46.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:45.314 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:45.314 Nvme1n1 : 1.01 9193.41 35.91 0.00 0.00 13890.46 3222.19 29928.11 00:34:45.314 [2024-12-05T20:27:46.751Z] =================================================================================================================== 00:34:45.314 [2024-12-05T20:27:46.751Z] Total : 9193.41 35.91 0.00 0.00 13890.46 3222.19 29928.11 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.314 rmmod nvme_tcp 00:34:45.314 rmmod nvme_fabrics 00:34:45.314 rmmod nvme_keyring 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2359243 ']' 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2359243 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2359243 ']' 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2359243 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359243 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:45.314 21:27:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359243' 00:34:45.314 killing process with pid 2359243 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2359243 00:34:45.314 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2359243 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.575 21:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:48.119 00:34:48.119 real 0m13.509s 00:34:48.119 user 0m15.489s 00:34:48.119 sys 0m7.806s 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:48.119 ************************************ 00:34:48.119 END TEST nvmf_bdev_io_wait 00:34:48.119 ************************************ 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:48.119 21:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:48.119 ************************************ 00:34:48.119 START TEST nvmf_queue_depth 00:34:48.119 ************************************ 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:48.119 * Looking for test storage... 
00:34:48.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.119 --rc genhtml_branch_coverage=1 00:34:48.119 --rc genhtml_function_coverage=1 00:34:48.119 --rc genhtml_legend=1 00:34:48.119 --rc geninfo_all_blocks=1 00:34:48.119 --rc geninfo_unexecuted_blocks=1 00:34:48.119 00:34:48.119 ' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.119 --rc genhtml_branch_coverage=1 00:34:48.119 --rc genhtml_function_coverage=1 00:34:48.119 --rc genhtml_legend=1 00:34:48.119 --rc geninfo_all_blocks=1 00:34:48.119 --rc geninfo_unexecuted_blocks=1 00:34:48.119 00:34:48.119 ' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.119 --rc genhtml_branch_coverage=1 00:34:48.119 --rc genhtml_function_coverage=1 00:34:48.119 --rc genhtml_legend=1 00:34:48.119 --rc geninfo_all_blocks=1 00:34:48.119 --rc geninfo_unexecuted_blocks=1 00:34:48.119 00:34:48.119 ' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.119 --rc genhtml_branch_coverage=1 00:34:48.119 --rc genhtml_function_coverage=1 00:34:48.119 --rc genhtml_legend=1 00:34:48.119 --rc 
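The xtrace above walks through `lt 1.15 2` / `cmp_versions` from scripts/common.sh, gating the lcov version before picking coverage flags. A simplified sketch of that field-by-field numeric comparison follows; it drops the real helper's `decimal` validation of each field, so treat it as an approximation of the traced logic rather than the exact script.

```shell
# Minimal sketch of the version comparison traced in the log: split each
# version on dots, pad the shorter one with zeros, compare numerically.
lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=${#a[@]}
  (( ${#b[@]} > max )) && max=${#b[@]}
  for (( i = 0; i < max; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # the same check the log's lcov gate performs
```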
geninfo_all_blocks=1 00:34:48.119 --rc geninfo_unexecuted_blocks=1 00:34:48.119 00:34:48.119 ' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.119 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.120 21:27:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.120 21:27:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:48.120 21:27:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.120 21:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.263 
21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:56.263 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.263 21:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:56.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:56.263 Found net devices under 0000:31:00.0: cvl_0_0 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:56.263 Found net devices under 0000:31:00.1: cvl_0_1 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.263 21:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.263 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:56.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:34:56.264 00:34:56.264 --- 10.0.0.2 ping statistics --- 00:34:56.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.264 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:56.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:34:56.264 00:34:56.264 --- 10.0.0.1 ping statistics --- 00:34:56.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.264 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.264 21:27:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2364637 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2364637 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2364637 ']' 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.264 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.524 [2024-12-05 21:27:57.709734] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.524 [2024-12-05 21:27:57.710752] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:34:56.524 [2024-12-05 21:27:57.710791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.524 [2024-12-05 21:27:57.815563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.524 [2024-12-05 21:27:57.849829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.524 [2024-12-05 21:27:57.849867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.524 [2024-12-05 21:27:57.849875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.524 [2024-12-05 21:27:57.849882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.524 [2024-12-05 21:27:57.849888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.524 [2024-12-05 21:27:57.850441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.524 [2024-12-05 21:27:57.906945] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.524 [2024-12-05 21:27:57.907191] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:56.524 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.524 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:56.524 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.524 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.524 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 [2024-12-05 21:27:57.975212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.784 21:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 Malloc0 00:34:56.784 21:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 [2024-12-05 21:27:58.039368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.784 
21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2364657 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2364657 /var/tmp/bdevperf.sock 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2364657 ']' 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:56.784 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:56.784 [2024-12-05 21:27:58.095561] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:34:56.784 [2024-12-05 21:27:58.095627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364657 ] 00:34:56.784 [2024-12-05 21:27:58.178418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.044 [2024-12-05 21:27:58.219721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.614 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.614 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:57.614 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:57.614 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.614 21:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:57.874 NVMe0n1 00:34:57.874 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.874 21:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:57.874 Running I/O for 10 seconds... 
00:34:59.761 8906.00 IOPS, 34.79 MiB/s [2024-12-05T20:28:02.584Z] 9201.50 IOPS, 35.94 MiB/s [2024-12-05T20:28:03.529Z] 9219.00 IOPS, 36.01 MiB/s [2024-12-05T20:28:04.473Z] 9828.00 IOPS, 38.39 MiB/s [2024-12-05T20:28:05.416Z] 10245.80 IOPS, 40.02 MiB/s [2024-12-05T20:28:06.356Z] 10577.83 IOPS, 41.32 MiB/s [2024-12-05T20:28:07.298Z] 10775.29 IOPS, 42.09 MiB/s [2024-12-05T20:28:08.241Z] 10880.12 IOPS, 42.50 MiB/s [2024-12-05T20:28:09.629Z] 11034.56 IOPS, 43.10 MiB/s [2024-12-05T20:28:09.629Z] 11130.90 IOPS, 43.48 MiB/s 00:35:08.192 Latency(us) 00:35:08.192 [2024-12-05T20:28:09.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.192 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:08.192 Verification LBA range: start 0x0 length 0x4000 00:35:08.192 NVMe0n1 : 10.06 11160.96 43.60 0.00 0.00 91386.84 19879.25 91313.49 00:35:08.192 [2024-12-05T20:28:09.629Z] =================================================================================================================== 00:35:08.192 [2024-12-05T20:28:09.629Z] Total : 11160.96 43.60 0.00 0.00 91386.84 19879.25 91313.49 00:35:08.192 { 00:35:08.192 "results": [ 00:35:08.192 { 00:35:08.192 "job": "NVMe0n1", 00:35:08.192 "core_mask": "0x1", 00:35:08.192 "workload": "verify", 00:35:08.192 "status": "finished", 00:35:08.192 "verify_range": { 00:35:08.192 "start": 0, 00:35:08.192 "length": 16384 00:35:08.192 }, 00:35:08.192 "queue_depth": 1024, 00:35:08.192 "io_size": 4096, 00:35:08.192 "runtime": 10.057199, 00:35:08.192 "iops": 11160.96042247946, 00:35:08.192 "mibps": 43.59750165031039, 00:35:08.192 "io_failed": 0, 00:35:08.192 "io_timeout": 0, 00:35:08.192 "avg_latency_us": 91386.84241465326, 00:35:08.192 "min_latency_us": 19879.253333333334, 00:35:08.192 "max_latency_us": 91313.49333333333 00:35:08.192 } 00:35:08.192 ], 00:35:08.192 "core_count": 1 00:35:08.192 } 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2364657 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2364657 ']' 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2364657 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364657 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364657' 00:35:08.192 killing process with pid 2364657 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2364657 00:35:08.192 Received shutdown signal, test time was about 10.000000 seconds 00:35:08.192 00:35:08.192 Latency(us) 00:35:08.192 [2024-12-05T20:28:09.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.192 [2024-12-05T20:28:09.629Z] =================================================================================================================== 00:35:08.192 [2024-12-05T20:28:09.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2364657 00:35:08.192 21:28:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:08.192 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.193 rmmod nvme_tcp 00:35:08.193 rmmod nvme_fabrics 00:35:08.193 rmmod nvme_keyring 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2364637 ']' 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2364637 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2364637 ']' 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2364637 00:35:08.193 21:28:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364637 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364637' 00:35:08.193 killing process with pid 2364637 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2364637 00:35:08.193 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2364637 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.455 21:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.371 00:35:10.371 real 0m22.755s 00:35:10.371 user 0m24.851s 00:35:10.371 sys 0m7.925s 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:10.371 ************************************ 00:35:10.371 END TEST nvmf_queue_depth 00:35:10.371 ************************************ 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.371 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:10.632 ************************************ 00:35:10.632 START 
TEST nvmf_target_multipath 00:35:10.632 ************************************ 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:10.632 * Looking for test storage... 00:35:10.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.632 21:28:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.632 21:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:10.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.632 --rc genhtml_branch_coverage=1 00:35:10.632 --rc genhtml_function_coverage=1 00:35:10.632 --rc genhtml_legend=1 00:35:10.632 --rc geninfo_all_blocks=1 00:35:10.632 --rc geninfo_unexecuted_blocks=1 00:35:10.632 00:35:10.632 ' 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:10.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.632 --rc genhtml_branch_coverage=1 00:35:10.632 --rc genhtml_function_coverage=1 00:35:10.632 --rc genhtml_legend=1 00:35:10.632 --rc geninfo_all_blocks=1 00:35:10.632 --rc geninfo_unexecuted_blocks=1 00:35:10.632 00:35:10.632 ' 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:10.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.632 --rc genhtml_branch_coverage=1 00:35:10.632 --rc genhtml_function_coverage=1 00:35:10.632 --rc genhtml_legend=1 00:35:10.632 --rc geninfo_all_blocks=1 00:35:10.632 --rc geninfo_unexecuted_blocks=1 00:35:10.632 00:35:10.632 ' 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:10.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.632 --rc genhtml_branch_coverage=1 00:35:10.632 --rc genhtml_function_coverage=1 00:35:10.632 --rc genhtml_legend=1 00:35:10.632 --rc geninfo_all_blocks=1 00:35:10.632 --rc geninfo_unexecuted_blocks=1 00:35:10.632 00:35:10.632 ' 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.632 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.633 21:28:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.633 21:28:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.633 21:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:18.780 21:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:18.780 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:18.781 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:18.781 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:18.781 Found net devices under 0000:31:00.0: cvl_0_0 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.781 21:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:18.781 Found net devices under 0000:31:00.1: cvl_0_1 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.781 21:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.781 21:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.781 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.781 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.781 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:18.781 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.043 21:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:35:19.043 00:35:19.043 --- 10.0.0.2 ping statistics --- 00:35:19.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.043 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:19.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:35:19.043 00:35:19.043 --- 10.0.0.1 ping statistics --- 00:35:19.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.043 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:19.043 only one NIC for nvmf test 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:19.043 21:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.043 rmmod nvme_tcp 00:35:19.043 rmmod nvme_fabrics 00:35:19.043 rmmod nvme_keyring 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:19.043 21:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.043 21:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.585 
21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.585 00:35:21.585 real 0m10.689s 00:35:21.585 user 0m2.330s 00:35:21.585 sys 0m6.302s 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:21.585 ************************************ 00:35:21.585 END TEST nvmf_target_multipath 00:35:21.585 ************************************ 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:21.585 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:21.586 ************************************ 00:35:21.586 START TEST nvmf_zcopy 00:35:21.586 ************************************ 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:21.586 * Looking for test storage... 
00:35:21.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:21.586 21:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.586 --rc genhtml_branch_coverage=1 00:35:21.586 --rc genhtml_function_coverage=1 00:35:21.586 --rc genhtml_legend=1 00:35:21.586 --rc geninfo_all_blocks=1 00:35:21.586 --rc geninfo_unexecuted_blocks=1 00:35:21.586 00:35:21.586 ' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.586 --rc genhtml_branch_coverage=1 00:35:21.586 --rc genhtml_function_coverage=1 00:35:21.586 --rc genhtml_legend=1 00:35:21.586 --rc geninfo_all_blocks=1 00:35:21.586 --rc geninfo_unexecuted_blocks=1 00:35:21.586 00:35:21.586 ' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.586 --rc genhtml_branch_coverage=1 00:35:21.586 --rc genhtml_function_coverage=1 00:35:21.586 --rc genhtml_legend=1 00:35:21.586 --rc geninfo_all_blocks=1 00:35:21.586 --rc geninfo_unexecuted_blocks=1 00:35:21.586 00:35:21.586 ' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:21.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.586 --rc genhtml_branch_coverage=1 00:35:21.586 --rc genhtml_function_coverage=1 00:35:21.586 --rc genhtml_legend=1 00:35:21.586 --rc geninfo_all_blocks=1 00:35:21.586 --rc geninfo_unexecuted_blocks=1 00:35:21.586 00:35:21.586 ' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.586 21:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.586 21:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.586 21:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:29.738 
21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.738 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.739 21:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:29.739 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:29.740 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:29.740 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:29.740 Found net devices under 0000:31:00.0: cvl_0_0 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.740 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:29.741 Found net devices under 0000:31:00.1: cvl_0_1 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.741 21:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.741 21:28:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:35:29.741 00:35:29.741 --- 10.0.0.2 ping statistics --- 00:35:29.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.741 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:35:29.741 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:29.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:35:29.741 00:35:29.741 --- 10.0.0.1 ping statistics --- 00:35:29.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.741 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:35:30.003 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:30.003 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2376121 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2376121 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2376121 ']' 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:30.004 21:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.004 [2024-12-05 21:28:31.291475] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:30.004 [2024-12-05 21:28:31.292887] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
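The namespace plumbing traced a few lines above (`ip netns add` through the `iptables -I INPUT` rule) follows a fixed sequence: the target NIC (cvl_0_0) is moved into its own netns holding 10.0.0.2 while the initiator NIC (cvl_0_1) keeps 10.0.0.1 in the root namespace, and TCP port 4420 is opened for NVMe/TCP. The dry-run sketch below only echoes each command so the sequence can be inspected without root; `run` is a local stand-in, drop the `echo` to execute for real.

```shell
# Dry-run sketch of the netns split performed by nvmf_tcp_init above.
run() { echo "$*"; }

setup_tcp_ns() {
    local tgt=$1 ini=$2 ns="${1}_ns_spdk"
    run ip -4 addr flush "$tgt"                       # clear stale addresses
    run ip -4 addr flush "$ini"
    run ip netns add "$ns"                            # target gets its own netns
    run ip link set "$tgt" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$ini"            # initiator side, root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_ns cvl_0_0 cvl_0_1
```

The two `ping -c 1` checks that follow in the log verify the split in both directions before any NVMe traffic is attempted.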
00:35:30.004 [2024-12-05 21:28:31.292940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.004 [2024-12-05 21:28:31.405162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.263 [2024-12-05 21:28:31.455975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.263 [2024-12-05 21:28:31.456029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.263 [2024-12-05 21:28:31.456038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:30.263 [2024-12-05 21:28:31.456045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:30.263 [2024-12-05 21:28:31.456052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:30.263 [2024-12-05 21:28:31.456814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.263 [2024-12-05 21:28:31.535205] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:30.263 [2024-12-05 21:28:31.535472] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
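The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a retry loop; a hedged sketch of that pattern is below. It polls until the pid is alive and the RPC socket exists, giving up after `max_retries`; the real helper also probes the socket with an RPC call, which this simplified version omits.

```shell
# Simplified sketch of the waitforlisten pattern: poll for a live pid plus
# an existing UNIX-domain RPC socket, with a bounded number of retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if kill -0 "$pid" 2>/dev/null && [ -S "$rpc_addr" ]; then
            return 0    # process up and socket present
        fi
        sleep 0.1
    done
    return 1            # gave up waiting
}
```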
00:35:30.833 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.833 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:30.833 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 [2024-12-05 21:28:32.133656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 
21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 [2024-12-05 21:28:32.161918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 malloc0 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:30.834 { 00:35:30.834 "params": { 00:35:30.834 "name": "Nvme$subsystem", 00:35:30.834 "trtype": "$TEST_TRANSPORT", 00:35:30.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.834 "adrfam": "ipv4", 00:35:30.834 "trsvcid": "$NVMF_PORT", 00:35:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.834 "hdgst": ${hdgst:-false}, 00:35:30.834 "ddgst": ${ddgst:-false} 00:35:30.834 }, 00:35:30.834 "method": "bdev_nvme_attach_controller" 00:35:30.834 } 00:35:30.834 EOF 00:35:30.834 )") 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:30.834 21:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:30.834 21:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:30.834 "params": { 00:35:30.834 "name": "Nvme1", 00:35:30.834 "trtype": "tcp", 00:35:30.834 "traddr": "10.0.0.2", 00:35:30.834 "adrfam": "ipv4", 00:35:30.834 "trsvcid": "4420", 00:35:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:30.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:30.834 "hdgst": false, 00:35:30.834 "ddgst": false 00:35:30.834 }, 00:35:30.834 "method": "bdev_nvme_attach_controller" 00:35:30.834 }' 00:35:31.094 [2024-12-05 21:28:32.270737] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:35:31.094 [2024-12-05 21:28:32.270802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376371 ] 00:35:31.094 [2024-12-05 21:28:32.354500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.094 [2024-12-05 21:28:32.394670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.363 Running I/O for 10 seconds... 
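The heredoc-driven config generation printed above can be condensed into a small sketch: one `bdev_nvme_attach_controller` entry is emitted per subsystem, with the digest options defaulting to `false`. The field values mirror the log output; the function itself is illustrative, not SPDK's `gen_nvmf_target_json` helper.

```shell
# Sketch of the per-subsystem JSON entry fed to bdevperf via --json above.
gen_attach_entry() {
    local n=$1 traddr=$2 port=$3
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_entry 1 10.0.0.2 4420
```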
00:35:33.346 6650.00 IOPS, 51.95 MiB/s [2024-12-05T20:28:35.722Z] 6706.50 IOPS, 52.39 MiB/s [2024-12-05T20:28:37.102Z] 6713.00 IOPS, 52.45 MiB/s [2024-12-05T20:28:38.041Z] 6726.75 IOPS, 52.55 MiB/s [2024-12-05T20:28:38.982Z] 6734.60 IOPS, 52.61 MiB/s [2024-12-05T20:28:39.923Z] 7146.17 IOPS, 55.83 MiB/s [2024-12-05T20:28:40.866Z] 7517.86 IOPS, 58.73 MiB/s [2024-12-05T20:28:41.808Z] 7796.00 IOPS, 60.91 MiB/s [2024-12-05T20:28:42.750Z] 8015.56 IOPS, 62.62 MiB/s [2024-12-05T20:28:42.750Z] 8188.70 IOPS, 63.97 MiB/s 00:35:41.313 Latency(us) 00:35:41.313 [2024-12-05T20:28:42.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.313 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:41.313 Verification LBA range: start 0x0 length 0x1000 00:35:41.313 Nvme1n1 : 10.01 8193.02 64.01 0.00 0.00 15570.95 1433.60 25995.95 00:35:41.313 [2024-12-05T20:28:42.750Z] =================================================================================================================== 00:35:41.313 [2024-12-05T20:28:42.750Z] Total : 8193.02 64.01 0.00 0.00 15570.95 1433.60 25995.95 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2378375 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:41.574 21:28:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:41.574 { 00:35:41.574 "params": { 00:35:41.574 "name": "Nvme$subsystem", 00:35:41.574 "trtype": "$TEST_TRANSPORT", 00:35:41.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.574 "adrfam": "ipv4", 00:35:41.574 "trsvcid": "$NVMF_PORT", 00:35:41.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.574 "hdgst": ${hdgst:-false}, 00:35:41.574 "ddgst": ${ddgst:-false} 00:35:41.574 }, 00:35:41.574 "method": "bdev_nvme_attach_controller" 00:35:41.574 } 00:35:41.574 EOF 00:35:41.574 )") 00:35:41.574 [2024-12-05 21:28:42.813244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.574 [2024-12-05 21:28:42.813276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:41.574 21:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:41.574 "params": { 00:35:41.574 "name": "Nvme1", 00:35:41.574 "trtype": "tcp", 00:35:41.574 "traddr": "10.0.0.2", 00:35:41.574 "adrfam": "ipv4", 00:35:41.574 "trsvcid": "4420", 00:35:41.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:41.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:41.574 "hdgst": false, 00:35:41.574 "ddgst": false 00:35:41.574 }, 00:35:41.574 "method": "bdev_nvme_attach_controller" 00:35:41.574 }' 00:35:41.574 [2024-12-05 21:28:42.825206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.574 [2024-12-05 21:28:42.825215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.574 [2024-12-05 21:28:42.837205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.574 [2024-12-05 21:28:42.837214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.574 [2024-12-05 21:28:42.849205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.574 [2024-12-05 21:28:42.849213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.574 [2024-12-05 21:28:42.858589] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:35:41.574 [2024-12-05 21:28:42.858637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378375 ] 00:35:41.574 [2024-12-05 21:28:42.861205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.574 [2024-12-05 21:28:42.861214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.574 [2024-12-05 21:28:42.873204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.873212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.885204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.885211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.897205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.897212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.909204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.909211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.921204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.921211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.933205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.933211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
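One plausible reading of the repeated error pairs above: while bdevperf drives I/O, the test keeps re-issuing `nvmf_subsystem_add_ns` for an NSID that already exists, so every call is answered with "Requested NSID 1 already in use" / "Unable to add namespace", and what is effectively being exercised is that the target keeps answering RPCs under load. In the sketch below `rpc_cmd` is a local stub mimicking that error reply; the loop counts failures as the expected outcome.

```shell
# Stubbed sketch of the expect-failure RPC churn behind the repeated errors.
rpc_cmd() { echo "Unable to add namespace" >&2; return 1; }

expected_failures=0
for attempt in 1 2 3; do
    if ! rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2>/dev/null; then
        expected_failures=$((expected_failures + 1))  # failure is the expected outcome
    fi
done
echo "expected failures: $expected_failures"
```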
00:35:41.575 [2024-12-05 21:28:42.935202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.575 [2024-12-05 21:28:42.945207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.945218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.957205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.957213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.969206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.969215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.970297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.575 [2024-12-05 21:28:42.981207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.981215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:42.993209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:42.993222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.575 [2024-12-05 21:28:43.005206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.575 [2024-12-05 21:28:43.005218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.017205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.017214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.029205] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.029212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.041213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.041226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.053208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.053220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.065207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.065216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.077206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.077215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.089205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.089212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.101204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.101211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.113205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.113214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.125207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.125218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.137210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.137222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.180433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.180445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.189207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.189217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 Running I/O for 5 seconds... 00:35:41.836 [2024-12-05 21:28:43.203944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.203960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.217097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.217113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.229860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.836 [2024-12-05 21:28:43.229879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.836 [2024-12-05 21:28:43.243963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.837 [2024-12-05 21:28:43.243979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.837 [2024-12-05 21:28:43.256713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:35:41.837 [2024-12-05 21:28:43.256728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.837 [2024-12-05 21:28:43.269760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.837 [2024-12-05 21:28:43.269775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.097 [2024-12-05 21:28:43.284411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.284427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.297138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.297154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.309416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.309431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.322142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.322156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.336268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.336283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.348690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.348705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.360865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 
[2024-12-05 21:28:43.360880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.373274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.373289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.385743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.385757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.399813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.399829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.412783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.412799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.425544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.425558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.440089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.440104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.452784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.452799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:42.098 [2024-12-05 21:28:43.465610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:42.098 [2024-12-05 21:28:43.465624] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:42.098 [2024-12-05 21:28:43.480013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:42.098 [2024-12-05 21:28:43.480029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:42.884 19465.00 IOPS, 152.07 MiB/s [2024-12-05T20:28:44.321Z]
00:35:43.928 19523.50 IOPS, 152.53 MiB/s [2024-12-05T20:28:45.365Z]
00:35:44.189 [2024-12-05 21:28:45.525203] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.525217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.537646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.537660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.552469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.552484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.565091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.565106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.577396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.577411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.589904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.589918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.604019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.604034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.189 [2024-12-05 21:28:45.616838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.189 [2024-12-05 21:28:45.616853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.629379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.629395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.641786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.641800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.656016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.656031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.668765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.668781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.681425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.681445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.694058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.694073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.708525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.708541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.721560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.721574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.736571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 
[2024-12-05 21:28:45.736586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.749078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.749093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.761708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.761722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.776210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.776225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.788866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.788881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.801426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.801441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.814304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.814318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.828177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.828192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.840679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.840694] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.853637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.853652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.867944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.867959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.450 [2024-12-05 21:28:45.880558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.450 [2024-12-05 21:28:45.880573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.711 [2024-12-05 21:28:45.893174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.893190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:45.905629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.905643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:45.920224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.920238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:45.933039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.933058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:45.945852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.945872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:44.712 [2024-12-05 21:28:45.960313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.960329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:45.973534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.973548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:45.988452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:45.988466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.001292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.001307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.013801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.013815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.027778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.027793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.040748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.040763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.053678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.053692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.067798] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.067813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.080404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.080418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.092741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.092756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.105144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.105159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.117387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.117401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.130153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.130167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.712 [2024-12-05 21:28:46.144624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.712 [2024-12-05 21:28:46.144638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.157626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.157640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.172318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.172333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.185136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.185156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.197968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.197983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 19531.33 IOPS, 152.59 MiB/s [2024-12-05T20:28:46.410Z] [2024-12-05 21:28:46.212054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.212069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.224603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.224618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.237204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.237219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.249952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.249966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.264019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.264034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.276864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.276879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.289178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.289193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.301381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.301396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.313898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.313913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.327990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.328005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.340685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.340700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.353110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.353124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.365703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.365717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.380207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 
[2024-12-05 21:28:46.380222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.392969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.392984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:44.973 [2024-12-05 21:28:46.405355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:44.973 [2024-12-05 21:28:46.405370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.417888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.417902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.432447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.432461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.445088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.445102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.457328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.457343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.469806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.469820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.484693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.484707] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.497192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.497207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.509437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.509452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.521696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.521710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.535731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.535746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.548539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.548553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.561268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.561283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.573898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.573912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.588188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.588203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:45.234 [2024-12-05 21:28:46.600979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.600994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.613217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.613232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.625882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.625896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.640501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.640515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.653069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.653084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.234 [2024-12-05 21:28:46.665340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.234 [2024-12-05 21:28:46.665354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.677872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.677886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.692169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.692183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.705007] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.705022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.717565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.717579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.732464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.732478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.745104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.745118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.757325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.757340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.769797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.769810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.783931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.783945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.796562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.494 [2024-12-05 21:28:46.796577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.494 [2024-12-05 21:28:46.809202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.809216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.821483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.821496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.836052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.836066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.848681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.848695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.861373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.861387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.873576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.873590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.888046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.888061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.900816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.900832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.913225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 
[2024-12-05 21:28:46.913240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.495 [2024-12-05 21:28:46.925570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.495 [2024-12-05 21:28:46.925584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:46.940177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:46.940193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:46.952962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:46.952978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:46.965758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:46.965773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:46.980350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:46.980366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:46.992951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:46.992967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.005247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.005262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.017784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.017799] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.032118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.032133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.044664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.044679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.057531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.057545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.072048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.072063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.084573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.084588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.097639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.097653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.112784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.112799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.125449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.125464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:45.756 [2024-12-05 21:28:47.137057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.137071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.149809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.149823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.164430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.164449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.177286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.177301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:45.756 [2024-12-05 21:28:47.189620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:45.756 [2024-12-05 21:28:47.189634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.203812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.203828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 19566.00 IOPS, 152.86 MiB/s [2024-12-05T20:28:47.453Z] [2024-12-05 21:28:47.216482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.216496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.229065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.229079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:46.016 [2024-12-05 21:28:47.241785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.241799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.256499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.256514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.269544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.269558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.284203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.284217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.296938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.296954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.309472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.309485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.324360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.324375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.337029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.337044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.349530] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.349544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.364286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.364301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.377088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.377102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.389542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.389557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.016 [2024-12-05 21:28:47.404100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.016 [2024-12-05 21:28:47.404114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.017 [2024-12-05 21:28:47.416953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.017 [2024-12-05 21:28:47.416971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.017 [2024-12-05 21:28:47.428943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.017 [2024-12-05 21:28:47.428957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.017 [2024-12-05 21:28:47.441460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.017 [2024-12-05 21:28:47.441474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.456051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.456066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.468647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.468662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.481355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.481369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.493804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.493819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.507839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.507854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.520468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.520482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.533094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.533109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.545753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.545767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.559971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 
[2024-12-05 21:28:47.559986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.572614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.572629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.585610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.585624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.600135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.600150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.612662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.612677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.625439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.625454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.637909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.637923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.652239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.652255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.664789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.664807] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.677155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.677169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.689564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.689578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.277 [2024-12-05 21:28:47.704528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.277 [2024-12-05 21:28:47.704543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.717348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.717363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.729661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.729675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.744174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.744189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.756905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.756919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.769405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.769419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:35:46.538 [2024-12-05 21:28:47.782236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.782250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.796138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.796153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.808969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.808983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.821578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.821591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.836228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.836243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.849195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.849209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.861802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.861817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.876326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.876340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.888990] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.889004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.901856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.901875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.916511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.916526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.929301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.929315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.941899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.941913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.956128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.956143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.538 [2024-12-05 21:28:47.968935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.538 [2024-12-05 21:28:47.968950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:47.981278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:47.981292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:47.994079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:47.994093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.008048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.008064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.020591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.020606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.033744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.033758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.048351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.048365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.061149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.061164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.073438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.073452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.085627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.798 [2024-12-05 21:28:48.085641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.798 [2024-12-05 21:28:48.100311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 
[2024-12-05 21:28:48.100325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.112923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.112937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.125939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.125953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.140270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.140285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.153179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.153194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.165741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.165755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.180128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.180142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.192964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.192978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 [2024-12-05 21:28:48.205286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.205300] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 19567.40 IOPS, 152.87 MiB/s [2024-12-05T20:28:48.236Z] [2024-12-05 21:28:48.213212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.213225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:46.799 00:35:46.799 Latency(us) 00:35:46.799 [2024-12-05T20:28:48.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.799 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:46.799 Nvme1n1 : 5.01 19570.17 152.89 0.00 0.00 6534.67 2307.41 11687.25 00:35:46.799 [2024-12-05T20:28:48.236Z] =================================================================================================================== 00:35:46.799 [2024-12-05T20:28:48.236Z] Total : 19570.17 152.89 0.00 0.00 6534.67 2307.41 11687.25 00:35:46.799 [2024-12-05 21:28:48.225208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:46.799 [2024-12-05 21:28:48.225221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.237213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.237224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.249209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.249220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.261208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.261218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.273206] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.273215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.285205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.285214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.297205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.297212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.309207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.309216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 [2024-12-05 21:28:48.321205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:47.059 [2024-12-05 21:28:48.321212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:47.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2378375) - No such process 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2378375 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.059 21:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:47.059 delay0 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.059 21:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:35:47.319 [2024-12-05 21:28:48.509049] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:53.898 Initializing NVMe Controllers 00:35:53.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:53.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:53.898 Initialization complete. Launching workers. 
00:35:53.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3505 00:35:53.898 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3791, failed to submit 34 00:35:53.898 success 3641, unsuccessful 150, failed 0 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.898 rmmod nvme_tcp 00:35:53.898 rmmod nvme_fabrics 00:35:53.898 rmmod nvme_keyring 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2376121 ']' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2376121 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2376121 ']' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2376121 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376121 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376121' 00:35:53.898 killing process with pid 2376121 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2376121 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2376121 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.898 21:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.898 21:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.441 00:35:56.441 real 0m34.777s 00:35:56.441 user 0m43.776s 00:35:56.441 sys 0m12.638s 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:56.441 ************************************ 00:35:56.441 END TEST nvmf_zcopy 00:35:56.441 ************************************ 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:56.441 
************************************ 00:35:56.441 START TEST nvmf_nmic 00:35:56.441 ************************************ 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:56.441 * Looking for test storage... 00:35:56.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.441 21:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:56.441 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.442 21:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.442 --rc genhtml_branch_coverage=1 00:35:56.442 --rc genhtml_function_coverage=1 00:35:56.442 --rc genhtml_legend=1 00:35:56.442 --rc geninfo_all_blocks=1 00:35:56.442 --rc geninfo_unexecuted_blocks=1 00:35:56.442 00:35:56.442 ' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.442 --rc genhtml_branch_coverage=1 00:35:56.442 --rc genhtml_function_coverage=1 00:35:56.442 --rc genhtml_legend=1 00:35:56.442 --rc geninfo_all_blocks=1 00:35:56.442 --rc geninfo_unexecuted_blocks=1 00:35:56.442 00:35:56.442 ' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:56.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.442 --rc genhtml_branch_coverage=1 00:35:56.442 --rc genhtml_function_coverage=1 00:35:56.442 --rc genhtml_legend=1 00:35:56.442 --rc geninfo_all_blocks=1 00:35:56.442 --rc geninfo_unexecuted_blocks=1 00:35:56.442 00:35:56.442 ' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:56.442 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.442 --rc genhtml_branch_coverage=1 00:35:56.442 --rc genhtml_function_coverage=1 00:35:56.442 --rc genhtml_legend=1 00:35:56.442 --rc geninfo_all_blocks=1 00:35:56.442 --rc geninfo_unexecuted_blocks=1 00:35:56.442 00:35:56.442 ' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:56.442 21:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.442 21:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:56.442 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:56.443 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:56.443 21:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:04.585 21:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:04.585 21:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:04.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.585 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:04.586 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.586 21:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:04.586 Found net devices under 0000:31:00.0: cvl_0_0 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.586 21:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:04.586 Found net devices under 0000:31:00.1: cvl_0_1 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:04.586 21:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:04.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:36:04.586 00:36:04.586 --- 10.0.0.2 ping statistics --- 00:36:04.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.586 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:04.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:36:04.586 00:36:04.586 --- 10.0.0.1 ping statistics --- 00:36:04.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.586 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2385393 
00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2385393 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2385393 ']' 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:04.586 21:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:04.586 [2024-12-05 21:29:05.996014] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:04.586 [2024-12-05 21:29:05.996990] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:36:04.586 [2024-12-05 21:29:05.997027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.846 [2024-12-05 21:29:06.081003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:04.846 [2024-12-05 21:29:06.118396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.846 [2024-12-05 21:29:06.118429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.846 [2024-12-05 21:29:06.118440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.846 [2024-12-05 21:29:06.118446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.846 [2024-12-05 21:29:06.118452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:04.846 [2024-12-05 21:29:06.120216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.846 [2024-12-05 21:29:06.120337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.846 [2024-12-05 21:29:06.120490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.846 [2024-12-05 21:29:06.120491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.846 [2024-12-05 21:29:06.176860] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:04.846 [2024-12-05 21:29:06.176885] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:04.846 [2024-12-05 21:29:06.177873] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:04.846 [2024-12-05 21:29:06.178496] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:04.846 [2024-12-05 21:29:06.178592] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.434 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.434 [2024-12-05 21:29:06.836952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 Malloc0 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 [2024-12-05 21:29:06.909111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.696 21:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:05.696 test case1: single bdev can't be used in multiple subsystems 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 [2024-12-05 21:29:06.944859] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:05.696 [2024-12-05 21:29:06.944882] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:05.696 [2024-12-05 21:29:06.944890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:05.696 request: 00:36:05.696 { 00:36:05.696 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:05.696 "namespace": { 00:36:05.696 "bdev_name": "Malloc0", 00:36:05.696 "no_auto_visible": false, 00:36:05.696 "hide_metadata": false 00:36:05.696 }, 00:36:05.696 "method": "nvmf_subsystem_add_ns", 00:36:05.696 "req_id": 1 00:36:05.696 } 00:36:05.696 Got JSON-RPC error response 00:36:05.696 response: 00:36:05.696 { 00:36:05.696 "code": -32602, 00:36:05.696 "message": "Invalid parameters" 00:36:05.696 } 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:05.696 Adding namespace failed - expected result. 
00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:05.696 test case2: host connect to nvmf target in multiple paths 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:05.696 [2024-12-05 21:29:06.956972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.696 21:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:05.957 21:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:06.530 21:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:06.530 21:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:06.530 21:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:06.530 21:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:06.530 21:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:08.445 21:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:08.445 [global] 00:36:08.445 thread=1 00:36:08.445 invalidate=1 00:36:08.445 rw=write 00:36:08.445 time_based=1 00:36:08.445 runtime=1 00:36:08.445 ioengine=libaio 00:36:08.445 direct=1 00:36:08.445 bs=4096 00:36:08.445 iodepth=1 00:36:08.445 norandommap=0 00:36:08.445 numjobs=1 00:36:08.445 00:36:08.445 verify_dump=1 00:36:08.445 verify_backlog=512 00:36:08.445 verify_state_save=0 00:36:08.445 do_verify=1 00:36:08.445 verify=crc32c-intel 00:36:08.445 [job0] 00:36:08.445 filename=/dev/nvme0n1 00:36:08.445 Could not set queue depth (nvme0n1) 00:36:08.705 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:08.705 fio-3.35 00:36:08.705 Starting 1 thread 00:36:10.095 00:36:10.095 job0: (groupid=0, jobs=1): err= 0: pid=2386269: Thu Dec 5 
21:29:11 2024 00:36:10.095 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:10.095 slat (nsec): min=7027, max=63233, avg=26346.58, stdev=2662.16 00:36:10.095 clat (usec): min=647, max=1567, avg=968.87, stdev=99.86 00:36:10.095 lat (usec): min=674, max=1593, avg=995.22, stdev=99.67 00:36:10.095 clat percentiles (usec): 00:36:10.095 | 1.00th=[ 676], 5.00th=[ 791], 10.00th=[ 865], 20.00th=[ 914], 00:36:10.095 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:36:10.095 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1090], 95.00th=[ 1106], 00:36:10.095 | 99.00th=[ 1237], 99.50th=[ 1319], 99.90th=[ 1565], 99.95th=[ 1565], 00:36:10.095 | 99.99th=[ 1565] 00:36:10.095 write: IOPS=758, BW=3033KiB/s (3106kB/s)(3036KiB/1001msec); 0 zone resets 00:36:10.095 slat (nsec): min=9134, max=68720, avg=30717.95, stdev=9331.39 00:36:10.095 clat (usec): min=261, max=1208, avg=602.81, stdev=111.17 00:36:10.095 lat (usec): min=270, max=1241, avg=633.53, stdev=115.54 00:36:10.095 clat percentiles (usec): 00:36:10.095 | 1.00th=[ 355], 5.00th=[ 408], 10.00th=[ 453], 20.00th=[ 502], 00:36:10.095 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:36:10.095 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 758], 00:36:10.095 | 99.00th=[ 832], 99.50th=[ 922], 99.90th=[ 1205], 99.95th=[ 1205], 00:36:10.095 | 99.99th=[ 1205] 00:36:10.095 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:10.095 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:10.095 lat (usec) : 500=11.25%, 750=45.87%, 1000=29.27% 00:36:10.095 lat (msec) : 2=13.61% 00:36:10.095 cpu : usr=3.40%, sys=4.10%, ctx=1271, majf=0, minf=1 00:36:10.095 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.095 issued rwts: 
total=512,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.095 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.095 00:36:10.095 Run status group 0 (all jobs): 00:36:10.095 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:36:10.095 WRITE: bw=3033KiB/s (3106kB/s), 3033KiB/s-3033KiB/s (3106kB/s-3106kB/s), io=3036KiB (3109kB), run=1001-1001msec 00:36:10.095 00:36:10.095 Disk stats (read/write): 00:36:10.095 nvme0n1: ios=562/600, merge=0/0, ticks=548/278, in_queue=826, util=93.49% 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:10.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:10.095 21:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.095 rmmod nvme_tcp 00:36:10.095 rmmod nvme_fabrics 00:36:10.095 rmmod nvme_keyring 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2385393 ']' 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2385393 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2385393 ']' 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2385393 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.095 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2385393 
00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2385393' 00:36:10.357 killing process with pid 2385393 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2385393 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2385393 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.357 21:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.357 21:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.906 00:36:12.906 real 0m16.304s 00:36:12.906 user 0m35.228s 00:36:12.906 sys 0m7.907s 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:12.906 ************************************ 00:36:12.906 END TEST nvmf_nmic 00:36:12.906 ************************************ 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:12.906 ************************************ 00:36:12.906 START TEST nvmf_fio_target 00:36:12.906 ************************************ 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:12.906 * Looking for test storage... 
00:36:12.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:36:12.906 21:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.906 
21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:12.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.906 --rc genhtml_branch_coverage=1 00:36:12.906 --rc genhtml_function_coverage=1 00:36:12.906 --rc genhtml_legend=1 00:36:12.906 --rc geninfo_all_blocks=1 00:36:12.906 --rc geninfo_unexecuted_blocks=1 00:36:12.906 00:36:12.906 ' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:12.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.906 --rc genhtml_branch_coverage=1 00:36:12.906 --rc genhtml_function_coverage=1 00:36:12.906 --rc genhtml_legend=1 00:36:12.906 --rc geninfo_all_blocks=1 00:36:12.906 --rc geninfo_unexecuted_blocks=1 00:36:12.906 00:36:12.906 ' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:12.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.906 --rc genhtml_branch_coverage=1 00:36:12.906 --rc genhtml_function_coverage=1 00:36:12.906 --rc genhtml_legend=1 00:36:12.906 --rc geninfo_all_blocks=1 00:36:12.906 --rc geninfo_unexecuted_blocks=1 00:36:12.906 00:36:12.906 ' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:12.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.906 --rc genhtml_branch_coverage=1 00:36:12.906 --rc genhtml_function_coverage=1 00:36:12.906 --rc genhtml_legend=1 00:36:12.906 --rc geninfo_all_blocks=1 
00:36:12.906 --rc geninfo_unexecuted_blocks=1 00:36:12.906 00:36:12.906 ' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:12.906 
21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.906 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.907 21:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.907 
21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:12.907 21:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:12.907 21:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.043 21:29:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:21.043 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:21.043 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.043 
21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:21.043 Found net 
devices under 0000:31:00.0: cvl_0_0 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.043 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:21.044 Found net devices under 0000:31:00.1: cvl_0_1 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.044 21:29:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:36:21.044 21:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:21.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:36:21.044 00:36:21.044 --- 10.0.0.2 ping statistics --- 00:36:21.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.044 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:21.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:36:21.044 00:36:21.044 --- 10.0.0.1 ping statistics --- 00:36:21.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.044 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.044 21:29:22 
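The nvmf_tcp_init steps traced above (namespace creation, moving the target NIC, addressing both ends, opening port 4420, and the connectivity pings) can be sketched as the sequence below. run() only echoes each command so the sketch is safe to execute without root; the interface names and addresses (cvl_0_0/cvl_0_1, 10.0.0.1 and 10.0.0.2) are the ones reported in the log:

```shell
# Dry-run sketch of the nvmf_tcp_init sequence from the log above.
# run() echoes instead of executing, so no root privileges are needed.
NS=cvl_0_0_ns_spdk
run() { echo "$@"; }

setup_target_ns() {
    run ip netns add "$NS"
    run ip link set cvl_0_0 netns "$NS"          # target NIC into the namespace
    run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec "$NS" ip link set cvl_0_0 up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                       # initiator -> target check
}

ns_out=$(setup_target_ns)
printf '%s\n' "$ns_out"
```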
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2391290 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2391290 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2391290 ']' 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.044 21:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.044 [2024-12-05 21:29:22.379015] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:21.044 [2024-12-05 21:29:22.380065] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:36:21.044 [2024-12-05 21:29:22.380107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.044 [2024-12-05 21:29:22.469567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:21.307 [2024-12-05 21:29:22.507792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:21.307 [2024-12-05 21:29:22.507828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:21.307 [2024-12-05 21:29:22.507836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:21.307 [2024-12-05 21:29:22.507843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:21.307 [2024-12-05 21:29:22.507849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:21.307 [2024-12-05 21:29:22.509409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.307 [2024-12-05 21:29:22.509528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:21.307 [2024-12-05 21:29:22.509684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.307 [2024-12-05 21:29:22.509685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:21.307 [2024-12-05 21:29:22.566387] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:21.307 [2024-12-05 21:29:22.566474] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:21.307 [2024-12-05 21:29:22.567388] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:21.307 [2024-12-05 21:29:22.567521] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:21.307 [2024-12-05 21:29:22.567709] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:21.881 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:22.143 [2024-12-05 21:29:23.406198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.143 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:22.405 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:22.405 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:36:22.405 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:22.405 21:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:22.666 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:22.666 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:22.927 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:22.927 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:23.188 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:23.188 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:23.188 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:23.448 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:23.449 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:23.709 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:36:23.709 21:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:23.709 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:23.969 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:23.969 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:24.229 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:24.229 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:24.229 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.488 [2024-12-05 21:29:25.778312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.488 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:24.748 21:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:24.748 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:25.317 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:25.317 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:25.317 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:25.317 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:25.317 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:25.317 21:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
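The fio.sh target setup traced above boils down to a fixed rpc.py sequence: create the TCP transport, seven 64 MiB/512 B malloc bdevs, a raid0 and a concat RAID on top of four of them, then a subsystem with four namespaces and one listener. The dry-run summary below echoes instead of invoking scripts/rpc.py (rpc_py here is a local stand-in, not the script itself):

```shell
# Dry-run summary of the fio.sh target setup visible in the log.
rpc_py() { echo "rpc.py $*"; }

setup_target() {
    rpc_py nvmf_create_transport -t tcp -o -u 8192
    for _ in 1 2 3 4 5 6 7; do
        rpc_py bdev_malloc_create 64 512           # Malloc0 .. Malloc6
    done
    rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in Malloc0 Malloc1 raid0 concat0; do
        rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
    done
    rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

rpc_out=$(setup_target)
printf '%s\n' "$rpc_out"
```

After this sequence the initiator connects with `nvme connect ... -a 10.0.0.2 -s 4420` and waits until all four namespaces appear as block devices, exactly as waitforserial does above.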
common/autotest_common.sh@1212 -- # return 0 00:36:27.226 21:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:27.226 [global] 00:36:27.226 thread=1 00:36:27.226 invalidate=1 00:36:27.226 rw=write 00:36:27.226 time_based=1 00:36:27.226 runtime=1 00:36:27.226 ioengine=libaio 00:36:27.226 direct=1 00:36:27.226 bs=4096 00:36:27.226 iodepth=1 00:36:27.226 norandommap=0 00:36:27.226 numjobs=1 00:36:27.226 00:36:27.226 verify_dump=1 00:36:27.226 verify_backlog=512 00:36:27.226 verify_state_save=0 00:36:27.226 do_verify=1 00:36:27.226 verify=crc32c-intel 00:36:27.226 [job0] 00:36:27.226 filename=/dev/nvme0n1 00:36:27.226 [job1] 00:36:27.226 filename=/dev/nvme0n2 00:36:27.226 [job2] 00:36:27.226 filename=/dev/nvme0n3 00:36:27.226 [job3] 00:36:27.226 filename=/dev/nvme0n4 00:36:27.510 Could not set queue depth (nvme0n1) 00:36:27.510 Could not set queue depth (nvme0n2) 00:36:27.510 Could not set queue depth (nvme0n3) 00:36:27.510 Could not set queue depth (nvme0n4) 00:36:27.777 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.777 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.777 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.777 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:27.777 fio-3.35 00:36:27.777 Starting 4 threads 00:36:29.207 00:36:29.207 job0: (groupid=0, jobs=1): err= 0: pid=2392866: Thu Dec 5 21:29:30 2024 00:36:29.207 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:29.207 slat (nsec): min=25605, max=45200, avg=26912.51, stdev=2873.87 00:36:29.207 clat (usec): min=730, max=1523, avg=1131.17, stdev=125.28 00:36:29.207 lat (usec): min=757, 
max=1550, avg=1158.08, stdev=125.47 00:36:29.207 clat percentiles (usec): 00:36:29.207 | 1.00th=[ 775], 5.00th=[ 930], 10.00th=[ 963], 20.00th=[ 1037], 00:36:29.207 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[ 1172], 00:36:29.207 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1303], 00:36:29.207 | 99.00th=[ 1369], 99.50th=[ 1418], 99.90th=[ 1516], 99.95th=[ 1516], 00:36:29.207 | 99.99th=[ 1516] 00:36:29.207 write: IOPS=565, BW=2262KiB/s (2316kB/s)(2264KiB/1001msec); 0 zone resets 00:36:29.207 slat (nsec): min=10013, max=55851, avg=31939.43, stdev=8928.54 00:36:29.207 clat (usec): min=264, max=1124, avg=664.91, stdev=146.02 00:36:29.207 lat (usec): min=298, max=1159, avg=696.85, stdev=148.32 00:36:29.207 clat percentiles (usec): 00:36:29.207 | 1.00th=[ 338], 5.00th=[ 400], 10.00th=[ 474], 20.00th=[ 545], 00:36:29.207 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 701], 00:36:29.207 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 848], 95.00th=[ 906], 00:36:29.207 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 1123], 99.95th=[ 1123], 00:36:29.207 | 99.99th=[ 1123] 00:36:29.207 bw ( KiB/s): min= 4096, max= 4096, per=35.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:29.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:29.207 lat (usec) : 500=7.42%, 750=30.24%, 1000=21.15% 00:36:29.207 lat (msec) : 2=41.19% 00:36:29.207 cpu : usr=2.00%, sys=2.90%, ctx=1079, majf=0, minf=1 00:36:29.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.207 issued rwts: total=512,566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.207 job1: (groupid=0, jobs=1): err= 0: pid=2392871: Thu Dec 5 21:29:30 2024 00:36:29.207 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:36:29.207 slat (nsec): min=7429, max=46163, avg=27794.15, stdev=3092.75 00:36:29.207 clat (usec): min=482, max=1600, avg=1003.47, stdev=118.21 00:36:29.207 lat (usec): min=510, max=1628, avg=1031.26, stdev=118.21 00:36:29.207 clat percentiles (usec): 00:36:29.207 | 1.00th=[ 635], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 930], 00:36:29.207 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1029], 00:36:29.207 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:36:29.207 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1598], 99.95th=[ 1598], 00:36:29.207 | 99.99th=[ 1598] 00:36:29.207 write: IOPS=732, BW=2929KiB/s (2999kB/s)(2932KiB/1001msec); 0 zone resets 00:36:29.207 slat (nsec): min=9730, max=71648, avg=30706.63, stdev=10893.55 00:36:29.207 clat (usec): min=225, max=928, avg=595.25, stdev=120.46 00:36:29.207 lat (usec): min=260, max=964, avg=625.96, stdev=124.72 00:36:29.207 clat percentiles (usec): 00:36:29.207 | 1.00th=[ 277], 5.00th=[ 379], 10.00th=[ 424], 20.00th=[ 498], 00:36:29.207 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:36:29.207 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:36:29.207 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 930], 00:36:29.207 | 99.99th=[ 930] 00:36:29.207 bw ( KiB/s): min= 4096, max= 4096, per=35.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:29.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:29.207 lat (usec) : 250=0.08%, 500=12.21%, 750=42.73%, 1000=22.41% 00:36:29.207 lat (msec) : 2=22.57% 00:36:29.207 cpu : usr=2.10%, sys=5.20%, ctx=1247, majf=0, minf=1 00:36:29.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.207 issued rwts: total=512,733,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:29.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.207 job2: (groupid=0, jobs=1): err= 0: pid=2392872: Thu Dec 5 21:29:30 2024 00:36:29.207 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:29.207 slat (nsec): min=7111, max=44253, avg=25543.26, stdev=4113.99 00:36:29.207 clat (usec): min=345, max=1227, avg=942.03, stdev=118.94 00:36:29.207 lat (usec): min=370, max=1252, avg=967.57, stdev=119.54 00:36:29.207 clat percentiles (usec): 00:36:29.207 | 1.00th=[ 553], 5.00th=[ 717], 10.00th=[ 799], 20.00th=[ 857], 00:36:29.207 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 996], 00:36:29.207 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:36:29.207 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:36:29.207 | 99.99th=[ 1221] 00:36:29.207 write: IOPS=993, BW=3972KiB/s (4067kB/s)(3976KiB/1001msec); 0 zone resets 00:36:29.207 slat (nsec): min=5603, max=52028, avg=24730.89, stdev=11219.49 00:36:29.207 clat (usec): min=133, max=1234, avg=473.63, stdev=157.28 00:36:29.207 lat (usec): min=143, max=1255, avg=498.36, stdev=162.27 00:36:29.207 clat percentiles (usec): 00:36:29.207 | 1.00th=[ 161], 5.00th=[ 249], 10.00th=[ 277], 20.00th=[ 318], 00:36:29.207 | 30.00th=[ 383], 40.00th=[ 424], 50.00th=[ 474], 60.00th=[ 510], 00:36:29.207 | 70.00th=[ 553], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 734], 00:36:29.207 | 99.00th=[ 914], 99.50th=[ 979], 99.90th=[ 1237], 99.95th=[ 1237], 00:36:29.207 | 99.99th=[ 1237] 00:36:29.207 bw ( KiB/s): min= 4096, max= 4096, per=35.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:29.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:29.207 lat (usec) : 250=3.39%, 500=34.46%, 750=27.89%, 1000=21.98% 00:36:29.207 lat (msec) : 2=12.28% 00:36:29.207 cpu : usr=2.40%, sys=3.40%, ctx=1506, majf=0, minf=2 00:36:29.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.207 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.207 issued rwts: total=512,994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.207 job3: (groupid=0, jobs=1): err= 0: pid=2392873: Thu Dec 5 21:29:30 2024 00:36:29.207 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:29.208 slat (nsec): min=8169, max=57680, avg=26447.75, stdev=4047.23 00:36:29.208 clat (usec): min=259, max=1567, avg=1085.32, stdev=159.40 00:36:29.208 lat (usec): min=274, max=1593, avg=1111.77, stdev=160.38 00:36:29.208 clat percentiles (usec): 00:36:29.208 | 1.00th=[ 523], 5.00th=[ 750], 10.00th=[ 881], 20.00th=[ 1012], 00:36:29.208 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:36:29.208 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1270], 00:36:29.208 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1565], 99.95th=[ 1565], 00:36:29.208 | 99.99th=[ 1565] 00:36:29.208 write: IOPS=626, BW=2505KiB/s (2566kB/s)(2508KiB/1001msec); 0 zone resets 00:36:29.208 slat (nsec): min=10301, max=55603, avg=31408.84, stdev=10017.86 00:36:29.208 clat (usec): min=171, max=982, avg=635.63, stdev=129.00 00:36:29.208 lat (usec): min=181, max=1018, avg=667.04, stdev=133.21 00:36:29.208 clat percentiles (usec): 00:36:29.208 | 1.00th=[ 277], 5.00th=[ 400], 10.00th=[ 465], 20.00th=[ 529], 00:36:29.208 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 685], 00:36:29.208 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 824], 00:36:29.208 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 979], 99.95th=[ 979], 00:36:29.208 | 99.99th=[ 979] 00:36:29.208 bw ( KiB/s): min= 4096, max= 4096, per=35.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:29.208 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:29.208 lat (usec) : 250=0.26%, 500=8.34%, 750=38.45%, 1000=16.07% 00:36:29.208 
lat (msec) : 2=36.87% 00:36:29.208 cpu : usr=1.90%, sys=3.20%, ctx=1140, majf=0, minf=1 00:36:29.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:29.208 issued rwts: total=512,627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:29.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:29.208 00:36:29.208 Run status group 0 (all jobs): 00:36:29.208 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:36:29.208 WRITE: bw=11.4MiB/s (11.9MB/s), 2262KiB/s-3972KiB/s (2316kB/s-4067kB/s), io=11.4MiB (12.0MB), run=1001-1001msec 00:36:29.208 00:36:29.208 Disk stats (read/write): 00:36:29.208 nvme0n1: ios=453/512, merge=0/0, ticks=1441/308, in_queue=1749, util=96.19% 00:36:29.208 nvme0n2: ios=512/512, merge=0/0, ticks=1404/243, in_queue=1647, util=96.22% 00:36:29.208 nvme0n3: ios=512/650, merge=0/0, ticks=468/288, in_queue=756, util=88.35% 00:36:29.208 nvme0n4: ios=438/512, merge=0/0, ticks=1373/318, in_queue=1691, util=96.03% 00:36:29.208 21:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:29.208 [global] 00:36:29.208 thread=1 00:36:29.208 invalidate=1 00:36:29.208 rw=randwrite 00:36:29.208 time_based=1 00:36:29.208 runtime=1 00:36:29.208 ioengine=libaio 00:36:29.208 direct=1 00:36:29.208 bs=4096 00:36:29.208 iodepth=1 00:36:29.208 norandommap=0 00:36:29.208 numjobs=1 00:36:29.208 00:36:29.208 verify_dump=1 00:36:29.208 verify_backlog=512 00:36:29.208 verify_state_save=0 00:36:29.208 do_verify=1 00:36:29.208 verify=crc32c-intel 00:36:29.208 [job0] 00:36:29.208 filename=/dev/nvme0n1 00:36:29.208 [job1] 00:36:29.208 filename=/dev/nvme0n2 00:36:29.208 
[job2] 00:36:29.208 filename=/dev/nvme0n3 00:36:29.208 [job3] 00:36:29.208 filename=/dev/nvme0n4 00:36:29.208 Could not set queue depth (nvme0n1) 00:36:29.208 Could not set queue depth (nvme0n2) 00:36:29.208 Could not set queue depth (nvme0n3) 00:36:29.208 Could not set queue depth (nvme0n4) 00:36:29.531 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.531 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.531 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.531 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:29.531 fio-3.35 00:36:29.531 Starting 4 threads 00:36:30.502 00:36:30.502 job0: (groupid=0, jobs=1): err= 0: pid=2393316: Thu Dec 5 21:29:31 2024 00:36:30.502 read: IOPS=83, BW=334KiB/s (342kB/s)(340KiB/1018msec) 00:36:30.502 slat (nsec): min=25655, max=58660, avg=27696.65, stdev=5158.07 00:36:30.502 clat (usec): min=635, max=42063, avg=7368.45, stdev=14784.54 00:36:30.502 lat (usec): min=662, max=42089, avg=7396.14, stdev=14783.76 00:36:30.502 clat percentiles (usec): 00:36:30.502 | 1.00th=[ 635], 5.00th=[ 996], 10.00th=[ 1020], 20.00th=[ 1057], 00:36:30.502 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:36:30.502 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[41681], 95.00th=[42206], 00:36:30.502 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:30.502 | 99.99th=[42206] 00:36:30.502 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:36:30.502 slat (usec): min=9, max=28709, avg=87.03, stdev=1267.44 00:36:30.502 clat (usec): min=189, max=1025, avg=659.98, stdev=126.97 00:36:30.502 lat (usec): min=200, max=29469, avg=747.01, stdev=1278.43 00:36:30.502 clat percentiles (usec): 00:36:30.502 | 1.00th=[ 343], 5.00th=[ 453], 
10.00th=[ 490], 20.00th=[ 553], 00:36:30.502 | 30.00th=[ 603], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[ 701], 00:36:30.502 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 865], 00:36:30.502 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1029], 99.95th=[ 1029], 00:36:30.502 | 99.99th=[ 1029] 00:36:30.502 bw ( KiB/s): min= 4096, max= 4096, per=47.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:30.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:30.502 lat (usec) : 250=0.17%, 500=9.88%, 750=55.28%, 1000=21.11% 00:36:30.502 lat (msec) : 2=11.39%, 50=2.18% 00:36:30.502 cpu : usr=0.98%, sys=1.67%, ctx=600, majf=0, minf=1 00:36:30.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.502 issued rwts: total=85,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.502 job1: (groupid=0, jobs=1): err= 0: pid=2393328: Thu Dec 5 21:29:31 2024 00:36:30.502 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:30.502 slat (nsec): min=8661, max=52132, avg=28308.53, stdev=3525.10 00:36:30.502 clat (usec): min=673, max=2391, avg=1093.93, stdev=141.93 00:36:30.502 lat (usec): min=682, max=2418, avg=1122.24, stdev=142.09 00:36:30.502 clat percentiles (usec): 00:36:30.502 | 1.00th=[ 717], 5.00th=[ 857], 10.00th=[ 930], 20.00th=[ 988], 00:36:30.502 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:36:30.502 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:36:30.502 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 2376], 99.95th=[ 2376], 00:36:30.502 | 99.99th=[ 2376] 00:36:30.502 write: IOPS=603, BW=2414KiB/s (2472kB/s)(2416KiB/1001msec); 0 zone resets 00:36:30.502 slat (nsec): min=9292, max=60553, avg=32210.11, stdev=10048.75 
00:36:30.502 clat (usec): min=207, max=1033, avg=654.48, stdev=164.87 00:36:30.502 lat (usec): min=240, max=1067, avg=686.69, stdev=167.36 00:36:30.502 clat percentiles (usec): 00:36:30.502 | 1.00th=[ 251], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 502], 00:36:30.502 | 30.00th=[ 578], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 709], 00:36:30.502 | 70.00th=[ 750], 80.00th=[ 799], 90.00th=[ 857], 95.00th=[ 906], 00:36:30.502 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1037], 99.95th=[ 1037], 00:36:30.502 | 99.99th=[ 1037] 00:36:30.502 bw ( KiB/s): min= 4096, max= 4096, per=47.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:30.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:30.502 lat (usec) : 250=0.54%, 500=9.77%, 750=28.14%, 1000=26.16% 00:36:30.502 lat (msec) : 2=35.30%, 4=0.09% 00:36:30.502 cpu : usr=1.60%, sys=4.20%, ctx=1118, majf=0, minf=1 00:36:30.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.502 issued rwts: total=512,604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.502 job2: (groupid=0, jobs=1): err= 0: pid=2393345: Thu Dec 5 21:29:31 2024 00:36:30.502 read: IOPS=45, BW=182KiB/s (186kB/s)(188KiB/1033msec) 00:36:30.502 slat (nsec): min=10317, max=28856, avg=25705.55, stdev=2342.83 00:36:30.502 clat (usec): min=641, max=42071, avg=13286.73, stdev=18877.23 00:36:30.502 lat (usec): min=668, max=42097, avg=13312.43, stdev=18877.45 00:36:30.502 clat percentiles (usec): 00:36:30.502 | 1.00th=[ 644], 5.00th=[ 914], 10.00th=[ 996], 20.00th=[ 1057], 00:36:30.502 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1270], 00:36:30.502 | 70.00th=[ 1352], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:36:30.502 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:36:30.502 | 99.99th=[42206] 00:36:30.502 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:36:30.502 slat (nsec): min=9453, max=52170, avg=30643.39, stdev=7739.10 00:36:30.502 clat (usec): min=227, max=1184, avg=755.90, stdev=148.29 00:36:30.502 lat (usec): min=259, max=1216, avg=786.55, stdev=150.98 00:36:30.502 clat percentiles (usec): 00:36:30.502 | 1.00th=[ 355], 5.00th=[ 502], 10.00th=[ 578], 20.00th=[ 635], 00:36:30.502 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 791], 00:36:30.502 | 70.00th=[ 824], 80.00th=[ 873], 90.00th=[ 947], 95.00th=[ 996], 00:36:30.502 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1188], 00:36:30.502 | 99.99th=[ 1188] 00:36:30.502 bw ( KiB/s): min= 4096, max= 4096, per=47.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:30.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:30.502 lat (usec) : 250=0.18%, 500=4.29%, 750=38.28%, 1000=46.33% 00:36:30.502 lat (msec) : 2=8.41%, 50=2.50% 00:36:30.502 cpu : usr=1.36%, sys=1.07%, ctx=559, majf=0, minf=2 00:36:30.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.502 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.502 job3: (groupid=0, jobs=1): err= 0: pid=2393351: Thu Dec 5 21:29:31 2024 00:36:30.502 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:30.503 slat (nsec): min=8535, max=59879, avg=26555.53, stdev=2908.82 00:36:30.503 clat (usec): min=798, max=1431, avg=1087.38, stdev=85.74 00:36:30.503 lat (usec): min=824, max=1457, avg=1113.94, stdev=85.66 00:36:30.503 clat percentiles (usec): 00:36:30.503 | 1.00th=[ 848], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1020], 
00:36:30.503 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:36:30.503 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:36:30.503 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1434], 99.95th=[ 1434], 00:36:30.503 | 99.99th=[ 1434] 00:36:30.503 write: IOPS=617, BW=2470KiB/s (2529kB/s)(2472KiB/1001msec); 0 zone resets 00:36:30.503 slat (nsec): min=4852, max=56123, avg=27174.04, stdev=10095.06 00:36:30.503 clat (usec): min=230, max=1162, avg=650.27, stdev=142.96 00:36:30.503 lat (usec): min=264, max=1177, avg=677.44, stdev=145.36 00:36:30.503 clat percentiles (usec): 00:36:30.503 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 515], 00:36:30.503 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:36:30.503 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 832], 95.00th=[ 889], 00:36:30.503 | 99.00th=[ 996], 99.50th=[ 1074], 99.90th=[ 1156], 99.95th=[ 1156], 00:36:30.503 | 99.99th=[ 1156] 00:36:30.503 bw ( KiB/s): min= 4096, max= 4096, per=47.10%, avg=4096.00, stdev= 0.00, samples=1 00:36:30.503 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:30.503 lat (usec) : 250=0.09%, 500=9.47%, 750=32.04%, 1000=18.85% 00:36:30.503 lat (msec) : 2=39.56% 00:36:30.503 cpu : usr=1.50%, sys=3.30%, ctx=1132, majf=0, minf=1 00:36:30.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.503 issued rwts: total=512,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:30.503 00:36:30.503 Run status group 0 (all jobs): 00:36:30.503 READ: bw=4476KiB/s (4584kB/s), 182KiB/s-2046KiB/s (186kB/s-2095kB/s), io=4624KiB (4735kB), run=1001-1033msec 00:36:30.503 WRITE: bw=8697KiB/s (8906kB/s), 1983KiB/s-2470KiB/s (2030kB/s-2529kB/s), io=8984KiB 
(9200kB), run=1001-1033msec 00:36:30.503 00:36:30.503 Disk stats (read/write): 00:36:30.503 nvme0n1: ios=123/512, merge=0/0, ticks=853/326, in_queue=1179, util=89.88% 00:36:30.503 nvme0n2: ios=477/512, merge=0/0, ticks=841/324, in_queue=1165, util=92.25% 00:36:30.503 nvme0n3: ios=99/512, merge=0/0, ticks=782/374, in_queue=1156, util=100.00% 00:36:30.503 nvme0n4: ios=474/512, merge=0/0, ticks=619/321, in_queue=940, util=97.33% 00:36:30.503 21:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:30.761 [global] 00:36:30.761 thread=1 00:36:30.761 invalidate=1 00:36:30.761 rw=write 00:36:30.761 time_based=1 00:36:30.761 runtime=1 00:36:30.761 ioengine=libaio 00:36:30.761 direct=1 00:36:30.761 bs=4096 00:36:30.761 iodepth=128 00:36:30.761 norandommap=0 00:36:30.761 numjobs=1 00:36:30.761 00:36:30.761 verify_dump=1 00:36:30.761 verify_backlog=512 00:36:30.761 verify_state_save=0 00:36:30.761 do_verify=1 00:36:30.761 verify=crc32c-intel 00:36:30.761 [job0] 00:36:30.761 filename=/dev/nvme0n1 00:36:30.761 [job1] 00:36:30.761 filename=/dev/nvme0n2 00:36:30.761 [job2] 00:36:30.761 filename=/dev/nvme0n3 00:36:30.761 [job3] 00:36:30.761 filename=/dev/nvme0n4 00:36:30.761 Could not set queue depth (nvme0n1) 00:36:30.761 Could not set queue depth (nvme0n2) 00:36:30.761 Could not set queue depth (nvme0n3) 00:36:30.761 Could not set queue depth (nvme0n4) 00:36:31.018 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:31.018 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:31.018 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:31.018 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:31.018 
fio-3.35 00:36:31.018 Starting 4 threads 00:36:32.392 00:36:32.392 job0: (groupid=0, jobs=1): err= 0: pid=2393771: Thu Dec 5 21:29:33 2024 00:36:32.392 read: IOPS=4796, BW=18.7MiB/s (19.6MB/s)(19.5MiB/1043msec) 00:36:32.392 slat (nsec): min=890, max=15573k, avg=85448.04, stdev=668021.50 00:36:32.392 clat (usec): min=3788, max=50503, avg=12566.41, stdev=7802.77 00:36:32.392 lat (usec): min=3797, max=53514, avg=12651.85, stdev=7841.87 00:36:32.392 clat percentiles (usec): 00:36:32.392 | 1.00th=[ 5735], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 7635], 00:36:32.392 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[10290], 00:36:32.392 | 70.00th=[14353], 80.00th=[17695], 90.00th=[20579], 95.00th=[25560], 00:36:32.392 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:36:32.392 | 99.99th=[50594] 00:36:32.392 write: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1043msec); 0 zone resets 00:36:32.392 slat (nsec): min=1564, max=16462k, avg=106011.47, stdev=811557.00 00:36:32.392 clat (usec): min=3187, max=36132, avg=13567.04, stdev=7465.71 00:36:32.392 lat (usec): min=3194, max=36139, avg=13673.05, stdev=7538.98 00:36:32.392 clat percentiles (usec): 00:36:32.392 | 1.00th=[ 5014], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7111], 00:36:32.392 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[11731], 60.00th=[15008], 00:36:32.392 | 70.00th=[16909], 80.00th=[18220], 90.00th=[24773], 95.00th=[27657], 00:36:32.392 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:36:32.392 | 99.99th=[35914] 00:36:32.392 bw ( KiB/s): min=20480, max=20480, per=24.88%, avg=20480.00, stdev= 0.00, samples=2 00:36:32.393 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:36:32.393 lat (msec) : 4=0.28%, 10=50.11%, 20=35.42%, 50=13.66%, 100=0.52% 00:36:32.393 cpu : usr=4.03%, sys=5.28%, ctx=328, majf=0, minf=1 00:36:32.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:32.393 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.393 issued rwts: total=5003,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.393 job1: (groupid=0, jobs=1): err= 0: pid=2393781: Thu Dec 5 21:29:33 2024 00:36:32.393 read: IOPS=4858, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1004msec) 00:36:32.393 slat (nsec): min=891, max=16227k, avg=99992.23, stdev=858198.67 00:36:32.393 clat (usec): min=3114, max=42446, avg=13788.78, stdev=6211.32 00:36:32.393 lat (usec): min=3118, max=42452, avg=13888.77, stdev=6261.93 00:36:32.393 clat percentiles (usec): 00:36:32.393 | 1.00th=[ 4555], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7963], 00:36:32.393 | 30.00th=[ 9503], 40.00th=[11863], 50.00th=[13173], 60.00th=[14615], 00:36:32.393 | 70.00th=[15795], 80.00th=[17433], 90.00th=[20579], 95.00th=[24511], 00:36:32.393 | 99.00th=[35390], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:36:32.393 | 99.99th=[42206] 00:36:32.393 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:36:32.393 slat (nsec): min=1592, max=14178k, avg=84796.09, stdev=668390.80 00:36:32.393 clat (usec): min=1206, max=59716, avg=11740.22, stdev=8528.32 00:36:32.393 lat (usec): min=1216, max=59721, avg=11825.02, stdev=8585.24 00:36:32.393 clat percentiles (usec): 00:36:32.393 | 1.00th=[ 3949], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6390], 00:36:32.393 | 30.00th=[ 6980], 40.00th=[ 8225], 50.00th=[ 8979], 60.00th=[10159], 00:36:32.393 | 70.00th=[11731], 80.00th=[14353], 90.00th=[22152], 95.00th=[27919], 00:36:32.393 | 99.00th=[53740], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:36:32.393 | 99.99th=[59507] 00:36:32.393 bw ( KiB/s): min=16384, max=24576, per=24.88%, avg=20480.00, stdev=5792.62, samples=2 00:36:32.393 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:36:32.393 lat (msec) : 2=0.02%, 4=1.06%, 
10=43.64%, 20=43.24%, 50=11.42% 00:36:32.393 lat (msec) : 100=0.62% 00:36:32.393 cpu : usr=4.29%, sys=4.99%, ctx=318, majf=0, minf=1 00:36:32.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:32.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.393 issued rwts: total=4878,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.393 job2: (groupid=0, jobs=1): err= 0: pid=2393795: Thu Dec 5 21:29:33 2024 00:36:32.393 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:36:32.393 slat (nsec): min=986, max=11617k, avg=92681.15, stdev=686499.29 00:36:32.393 clat (usec): min=3239, max=31524, avg=11356.59, stdev=4905.90 00:36:32.393 lat (usec): min=3244, max=31551, avg=11449.27, stdev=4955.96 00:36:32.393 clat percentiles (usec): 00:36:32.393 | 1.00th=[ 5014], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7635], 00:36:32.393 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:36:32.393 | 70.00th=[12518], 80.00th=[14615], 90.00th=[20579], 95.00th=[21627], 00:36:32.393 | 99.00th=[25297], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:36:32.393 | 99.99th=[31589] 00:36:32.393 write: IOPS=4807, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1008msec); 0 zone resets 00:36:32.393 slat (nsec): min=1668, max=14495k, avg=112632.21, stdev=629447.98 00:36:32.393 clat (usec): min=1957, max=59794, avg=15582.77, stdev=8869.91 00:36:32.393 lat (usec): min=1965, max=59799, avg=15695.40, stdev=8931.08 00:36:32.393 clat percentiles (usec): 00:36:32.393 | 1.00th=[ 4047], 5.00th=[ 5145], 10.00th=[ 6718], 20.00th=[ 8586], 00:36:32.393 | 30.00th=[10159], 40.00th=[11994], 50.00th=[12518], 60.00th=[15926], 00:36:32.393 | 70.00th=[18482], 80.00th=[22414], 90.00th=[26084], 95.00th=[28705], 00:36:32.393 | 99.00th=[53740], 99.50th=[58459], 99.90th=[60031], 
99.95th=[60031], 00:36:32.393 | 99.99th=[60031] 00:36:32.393 bw ( KiB/s): min=16872, max=20880, per=22.94%, avg=18876.00, stdev=2834.08, samples=2 00:36:32.393 iops : min= 4218, max= 5220, avg=4719.00, stdev=708.52, samples=2 00:36:32.393 lat (msec) : 2=0.06%, 4=0.63%, 10=44.01%, 20=36.42%, 50=18.21% 00:36:32.393 lat (msec) : 100=0.66% 00:36:32.393 cpu : usr=3.38%, sys=5.76%, ctx=445, majf=0, minf=2 00:36:32.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:32.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.393 issued rwts: total=4608,4846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.393 job3: (groupid=0, jobs=1): err= 0: pid=2393801: Thu Dec 5 21:29:33 2024 00:36:32.393 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:36:32.393 slat (nsec): min=994, max=14611k, avg=83232.49, stdev=693442.51 00:36:32.393 clat (usec): min=3026, max=47197, avg=10952.00, stdev=5595.81 00:36:32.393 lat (usec): min=3033, max=47204, avg=11035.24, stdev=5648.44 00:36:32.393 clat percentiles (usec): 00:36:32.393 | 1.00th=[ 4359], 5.00th=[ 4817], 10.00th=[ 5538], 20.00th=[ 6587], 00:36:32.393 | 30.00th=[ 7439], 40.00th=[ 8455], 50.00th=[ 9765], 60.00th=[11076], 00:36:32.393 | 70.00th=[11994], 80.00th=[14484], 90.00th=[17171], 95.00th=[21365], 00:36:32.393 | 99.00th=[30802], 99.50th=[40109], 99.90th=[46400], 99.95th=[47449], 00:36:32.393 | 99.99th=[47449] 00:36:32.393 write: IOPS=6323, BW=24.7MiB/s (25.9MB/s)(24.9MiB/1008msec); 0 zone resets 00:36:32.393 slat (nsec): min=1660, max=12329k, avg=71247.25, stdev=512308.73 00:36:32.393 clat (usec): min=1205, max=48291, avg=9506.80, stdev=6031.19 00:36:32.393 lat (usec): min=1217, max=48300, avg=9578.05, stdev=6063.87 00:36:32.393 clat percentiles (usec): 00:36:32.393 | 1.00th=[ 3556], 5.00th=[ 4178], 10.00th=[ 
5014], 20.00th=[ 6259], 00:36:32.393 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7898], 60.00th=[ 8717], 00:36:32.393 | 70.00th=[ 9765], 80.00th=[11863], 90.00th=[13698], 95.00th=[20579], 00:36:32.393 | 99.00th=[39060], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:36:32.393 | 99.99th=[48497] 00:36:32.393 bw ( KiB/s): min=18304, max=31672, per=30.36%, avg=24988.00, stdev=9452.60, samples=2 00:36:32.393 iops : min= 4576, max= 7918, avg=6247.00, stdev=2363.15, samples=2 00:36:32.393 lat (msec) : 2=0.08%, 4=1.97%, 10=59.50%, 20=32.28%, 50=6.17% 00:36:32.393 cpu : usr=5.06%, sys=6.06%, ctx=453, majf=0, minf=2 00:36:32.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:32.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:32.393 issued rwts: total=6144,6374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:32.393 00:36:32.393 Run status group 0 (all jobs): 00:36:32.393 READ: bw=77.3MiB/s (81.0MB/s), 17.9MiB/s-23.8MiB/s (18.7MB/s-25.0MB/s), io=80.6MiB (84.5MB), run=1004-1043msec 00:36:32.393 WRITE: bw=80.4MiB/s (84.3MB/s), 18.8MiB/s-24.7MiB/s (19.7MB/s-25.9MB/s), io=83.8MiB (87.9MB), run=1004-1043msec 00:36:32.393 00:36:32.393 Disk stats (read/write): 00:36:32.393 nvme0n1: ios=4146/4413, merge=0/0, ticks=30019/36263, in_queue=66282, util=96.19% 00:36:32.393 nvme0n2: ios=3615/4096, merge=0/0, ticks=47697/50497, in_queue=98194, util=86.85% 00:36:32.393 nvme0n3: ios=3584/3887, merge=0/0, ticks=39597/61330, in_queue=100927, util=88.37% 00:36:32.393 nvme0n4: ios=5356/5632, merge=0/0, ticks=53807/47251, in_queue=101058, util=89.41% 00:36:32.393 21:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:32.393 
[global] 00:36:32.393 thread=1 00:36:32.393 invalidate=1 00:36:32.393 rw=randwrite 00:36:32.393 time_based=1 00:36:32.393 runtime=1 00:36:32.393 ioengine=libaio 00:36:32.393 direct=1 00:36:32.393 bs=4096 00:36:32.393 iodepth=128 00:36:32.393 norandommap=0 00:36:32.393 numjobs=1 00:36:32.393 00:36:32.393 verify_dump=1 00:36:32.393 verify_backlog=512 00:36:32.393 verify_state_save=0 00:36:32.393 do_verify=1 00:36:32.393 verify=crc32c-intel 00:36:32.393 [job0] 00:36:32.393 filename=/dev/nvme0n1 00:36:32.393 [job1] 00:36:32.393 filename=/dev/nvme0n2 00:36:32.393 [job2] 00:36:32.393 filename=/dev/nvme0n3 00:36:32.393 [job3] 00:36:32.393 filename=/dev/nvme0n4 00:36:32.393 Could not set queue depth (nvme0n1) 00:36:32.393 Could not set queue depth (nvme0n2) 00:36:32.393 Could not set queue depth (nvme0n3) 00:36:32.393 Could not set queue depth (nvme0n4) 00:36:32.650 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.650 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.650 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.650 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:32.650 fio-3.35 00:36:32.650 Starting 4 threads 00:36:34.026 00:36:34.026 job0: (groupid=0, jobs=1): err= 0: pid=2394224: Thu Dec 5 21:29:35 2024 00:36:34.026 read: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1014msec) 00:36:34.026 slat (nsec): min=969, max=11588k, avg=97632.58, stdev=729768.59 00:36:34.026 clat (usec): min=656, max=87220, avg=11966.81, stdev=10674.90 00:36:34.026 lat (usec): min=682, max=87228, avg=12064.44, stdev=10769.61 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 1385], 5.00th=[ 2474], 10.00th=[ 3818], 20.00th=[ 6325], 00:36:34.026 | 30.00th=[ 7177], 40.00th=[ 8160], 50.00th=[ 9110], 
60.00th=[11994], 00:36:34.026 | 70.00th=[13042], 80.00th=[14877], 90.00th=[18220], 95.00th=[23462], 00:36:34.026 | 99.00th=[61080], 99.50th=[67634], 99.90th=[87557], 99.95th=[87557], 00:36:34.026 | 99.99th=[87557] 00:36:34.026 write: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1014msec); 0 zone resets 00:36:34.026 slat (nsec): min=1518, max=10965k, avg=106030.18, stdev=672230.39 00:36:34.026 clat (usec): min=232, max=87215, avg=16388.58, stdev=19370.99 00:36:34.026 lat (usec): min=326, max=87226, avg=16494.61, stdev=19495.10 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 676], 5.00th=[ 1614], 10.00th=[ 3130], 20.00th=[ 5735], 00:36:34.026 | 30.00th=[ 6783], 40.00th=[ 8029], 50.00th=[ 9241], 60.00th=[10683], 00:36:34.026 | 70.00th=[12256], 80.00th=[20317], 90.00th=[50594], 95.00th=[70779], 00:36:34.026 | 99.00th=[80217], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:36:34.026 | 99.99th=[87557] 00:36:34.026 bw ( KiB/s): min=18352, max=18512, per=22.23%, avg=18432.00, stdev=113.14, samples=2 00:36:34.026 iops : min= 4588, max= 4628, avg=4608.00, stdev=28.28, samples=2 00:36:34.026 lat (usec) : 250=0.01%, 500=0.10%, 750=0.77%, 1000=0.39% 00:36:34.026 lat (msec) : 2=4.11%, 4=6.46%, 10=43.48%, 20=29.67%, 50=8.40% 00:36:34.026 lat (msec) : 100=6.61% 00:36:34.026 cpu : usr=3.46%, sys=4.94%, ctx=342, majf=0, minf=1 00:36:34.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:36:34.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:34.026 issued rwts: total=4439,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:34.026 job1: (groupid=0, jobs=1): err= 0: pid=2394239: Thu Dec 5 21:29:35 2024 00:36:34.026 read: IOPS=8448, BW=33.0MiB/s (34.6MB/s)(33.3MiB/1008msec) 00:36:34.026 slat (nsec): min=902, max=7270.3k, avg=58904.94, stdev=475768.32 
00:36:34.026 clat (usec): min=2564, max=16540, avg=7919.89, stdev=2196.72 00:36:34.026 lat (usec): min=2569, max=19213, avg=7978.80, stdev=2232.75 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 3916], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6194], 00:36:34.026 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7832], 00:36:34.026 | 70.00th=[ 8094], 80.00th=[ 9110], 90.00th=[11338], 95.00th=[12649], 00:36:34.026 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15270], 99.95th=[15270], 00:36:34.026 | 99.99th=[16581] 00:36:34.026 write: IOPS=8634, BW=33.7MiB/s (35.4MB/s)(34.0MiB/1008msec); 0 zone resets 00:36:34.026 slat (nsec): min=1521, max=7763.5k, avg=52613.61, stdev=434320.03 00:36:34.026 clat (usec): min=885, max=14541, avg=6944.67, stdev=2026.34 00:36:34.026 lat (usec): min=934, max=14542, avg=6997.29, stdev=2044.14 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 3195], 5.00th=[ 3621], 10.00th=[ 4621], 20.00th=[ 5080], 00:36:34.026 | 30.00th=[ 5735], 40.00th=[ 6325], 50.00th=[ 7111], 60.00th=[ 7439], 00:36:34.026 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 9765], 95.00th=[10683], 00:36:34.026 | 99.00th=[13304], 99.50th=[13698], 99.90th=[13698], 99.95th=[13698], 00:36:34.026 | 99.99th=[14484] 00:36:34.026 bw ( KiB/s): min=32632, max=37000, per=42.00%, avg=34816.00, stdev=3088.64, samples=2 00:36:34.026 iops : min= 8158, max= 9250, avg=8704.00, stdev=772.16, samples=2 00:36:34.026 lat (usec) : 1000=0.02% 00:36:34.026 lat (msec) : 2=0.09%, 4=3.61%, 10=84.27%, 20=12.02% 00:36:34.026 cpu : usr=3.97%, sys=9.53%, ctx=361, majf=0, minf=2 00:36:34.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:34.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:34.026 issued rwts: total=8516,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.026 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:36:34.026 job2: (groupid=0, jobs=1): err= 0: pid=2394259: Thu Dec 5 21:29:35 2024 00:36:34.026 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:36:34.026 slat (nsec): min=993, max=11559k, avg=111716.68, stdev=752797.61 00:36:34.026 clat (usec): min=2396, max=79654, avg=12015.03, stdev=11008.52 00:36:34.026 lat (usec): min=2406, max=79662, avg=12126.75, stdev=11137.34 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 2704], 5.00th=[ 4146], 10.00th=[ 4424], 20.00th=[ 7242], 00:36:34.026 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10028], 00:36:34.026 | 70.00th=[12649], 80.00th=[13042], 90.00th=[19268], 95.00th=[29230], 00:36:34.026 | 99.00th=[69731], 99.50th=[74974], 99.90th=[79168], 99.95th=[79168], 00:36:34.026 | 99.99th=[79168] 00:36:34.026 write: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1012msec); 0 zone resets 00:36:34.026 slat (nsec): min=1550, max=20699k, avg=171342.60, stdev=925582.29 00:36:34.026 clat (usec): min=299, max=112280, avg=25351.77, stdev=27496.63 00:36:34.026 lat (usec): min=309, max=112289, avg=25523.11, stdev=27663.50 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 1352], 5.00th=[ 2704], 10.00th=[ 5342], 20.00th=[ 6521], 00:36:34.026 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 12125], 60.00th=[ 19268], 00:36:34.026 | 70.00th=[ 21890], 80.00th=[ 56361], 90.00th=[ 72877], 95.00th=[ 81265], 00:36:34.026 | 99.00th=[101188], 99.50th=[105382], 99.90th=[108528], 99.95th=[112722], 00:36:34.026 | 99.99th=[112722] 00:36:34.026 bw ( KiB/s): min=11120, max=17304, per=17.14%, avg=14212.00, stdev=4372.75, samples=2 00:36:34.026 iops : min= 2780, max= 4326, avg=3553.00, stdev=1093.19, samples=2 00:36:34.026 lat (usec) : 500=0.13%, 750=0.09% 00:36:34.026 lat (msec) : 2=1.06%, 4=4.88%, 10=46.56%, 20=24.40%, 50=9.90% 00:36:34.026 lat (msec) : 100=12.13%, 250=0.85% 00:36:34.026 cpu : usr=2.18%, sys=3.76%, ctx=338, majf=0, minf=1 00:36:34.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.5%, >=64=99.1% 00:36:34.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:34.026 issued rwts: total=3072,3608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:34.026 job3: (groupid=0, jobs=1): err= 0: pid=2394265: Thu Dec 5 21:29:35 2024 00:36:34.026 read: IOPS=3776, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1013msec) 00:36:34.026 slat (nsec): min=1006, max=16386k, avg=124373.01, stdev=886669.17 00:36:34.026 clat (usec): min=6199, max=92410, avg=14298.43, stdev=12072.90 00:36:34.026 lat (usec): min=6207, max=92422, avg=14422.80, stdev=12197.81 00:36:34.026 clat percentiles (usec): 00:36:34.026 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7701], 00:36:34.027 | 30.00th=[ 7963], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[12649], 00:36:34.027 | 70.00th=[15401], 80.00th=[17433], 90.00th=[18744], 95.00th=[33424], 00:36:34.027 | 99.00th=[77071], 99.50th=[84411], 99.90th=[92799], 99.95th=[92799], 00:36:34.027 | 99.99th=[92799] 00:36:34.027 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:36:34.027 slat (nsec): min=1610, max=9995.5k, avg=123222.82, stdev=691574.52 00:36:34.027 clat (usec): min=1184, max=92418, avg=18023.48, stdev=18180.95 00:36:34.027 lat (usec): min=1195, max=92428, avg=18146.70, stdev=18270.39 00:36:34.027 clat percentiles (usec): 00:36:34.027 | 1.00th=[ 4424], 5.00th=[ 5145], 10.00th=[ 6390], 20.00th=[ 6783], 00:36:34.027 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 9372], 60.00th=[13304], 00:36:34.027 | 70.00th=[19792], 80.00th=[24511], 90.00th=[44827], 95.00th=[60556], 00:36:34.027 | 99.00th=[87557], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:36:34.027 | 99.99th=[92799] 00:36:34.027 bw ( KiB/s): min=12288, max=20480, per=19.76%, avg=16384.00, stdev=5792.62, samples=2 00:36:34.027 iops : min= 3072, max= 5120, 
avg=4096.00, stdev=1448.15, samples=2 00:36:34.027 lat (msec) : 2=0.03%, 4=0.21%, 10=49.84%, 20=30.61%, 50=13.52% 00:36:34.027 lat (msec) : 100=5.79% 00:36:34.027 cpu : usr=2.37%, sys=4.94%, ctx=301, majf=0, minf=1 00:36:34.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:34.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:34.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:34.027 issued rwts: total=3826,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:34.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:34.027 00:36:34.027 Run status group 0 (all jobs): 00:36:34.027 READ: bw=76.5MiB/s (80.2MB/s), 11.9MiB/s-33.0MiB/s (12.4MB/s-34.6MB/s), io=77.6MiB (81.3MB), run=1008-1014msec 00:36:34.027 WRITE: bw=81.0MiB/s (84.9MB/s), 13.9MiB/s-33.7MiB/s (14.6MB/s-35.4MB/s), io=82.1MiB (86.1MB), run=1008-1014msec 00:36:34.027 00:36:34.027 Disk stats (read/write): 00:36:34.027 nvme0n1: ios=3372/3584, merge=0/0, ticks=39865/62946, in_queue=102811, util=95.99% 00:36:34.027 nvme0n2: ios=7215/7470, merge=0/0, ticks=52774/48062, in_queue=100836, util=92.86% 00:36:34.027 nvme0n3: ios=2093/2527, merge=0/0, ticks=23185/75782, in_queue=98967, util=91.66% 00:36:34.027 nvme0n4: ios=3611/3639, merge=0/0, ticks=45006/56303, in_queue=101309, util=91.12% 00:36:34.027 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:34.027 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2394464 00:36:34.027 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:34.027 21:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:34.027 [global] 00:36:34.027 thread=1 00:36:34.027 invalidate=1 00:36:34.027 
rw=read 00:36:34.027 time_based=1 00:36:34.027 runtime=10 00:36:34.027 ioengine=libaio 00:36:34.027 direct=1 00:36:34.027 bs=4096 00:36:34.027 iodepth=1 00:36:34.027 norandommap=1 00:36:34.027 numjobs=1 00:36:34.027 00:36:34.027 [job0] 00:36:34.027 filename=/dev/nvme0n1 00:36:34.027 [job1] 00:36:34.027 filename=/dev/nvme0n2 00:36:34.027 [job2] 00:36:34.027 filename=/dev/nvme0n3 00:36:34.027 [job3] 00:36:34.027 filename=/dev/nvme0n4 00:36:34.027 Could not set queue depth (nvme0n1) 00:36:34.027 Could not set queue depth (nvme0n2) 00:36:34.027 Could not set queue depth (nvme0n3) 00:36:34.027 Could not set queue depth (nvme0n4) 00:36:34.286 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:34.286 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:34.286 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:34.286 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:34.286 fio-3.35 00:36:34.286 Starting 4 threads 00:36:37.585 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:37.585 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16322560, buflen=4096 00:36:37.585 fio: pid=2394714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:37.585 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:37.585 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10338304, buflen=4096 00:36:37.585 fio: pid=2394707, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:37.585 21:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.585 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:37.585 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14102528, buflen=4096 00:36:37.585 fio: pid=2394679, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:37.585 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.585 21:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:37.585 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.585 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:37.585 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12369920, buflen=4096 00:36:37.585 fio: pid=2394691, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:37.845 00:36:37.845 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2394679: Thu Dec 5 21:29:39 2024 00:36:37.845 read: IOPS=1163, BW=4654KiB/s (4766kB/s)(13.4MiB/2959msec) 00:36:37.845 slat (usec): min=6, max=28091, avg=36.92, stdev=521.04 00:36:37.845 clat (usec): min=410, max=41480, avg=810.49, stdev=976.83 00:36:37.845 lat (usec): min=436, max=48526, avg=847.41, stdev=1180.09 00:36:37.845 clat percentiles 
(usec): 00:36:37.845 | 1.00th=[ 570], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 725], 00:36:37.845 | 30.00th=[ 766], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 807], 00:36:37.845 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 889], 00:36:37.845 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1237], 99.95th=[40633], 00:36:37.845 | 99.99th=[41681] 00:36:37.845 bw ( KiB/s): min= 4504, max= 4968, per=29.44%, avg=4840.00, stdev=191.42, samples=5 00:36:37.845 iops : min= 1126, max= 1242, avg=1210.00, stdev=47.85, samples=5 00:36:37.845 lat (usec) : 500=0.20%, 750=25.70%, 1000=73.66% 00:36:37.845 lat (msec) : 2=0.32%, 4=0.03%, 50=0.06% 00:36:37.845 cpu : usr=0.95%, sys=3.28%, ctx=3448, majf=0, minf=2 00:36:37.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.845 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.845 issued rwts: total=3444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.845 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2394691: Thu Dec 5 21:29:39 2024 00:36:37.845 read: IOPS=957, BW=3828KiB/s (3919kB/s)(11.8MiB/3156msec) 00:36:37.845 slat (usec): min=6, max=9443, avg=33.79, stdev=265.25 00:36:37.845 clat (usec): min=254, max=41848, avg=1001.29, stdev=2446.32 00:36:37.845 lat (usec): min=280, max=50491, avg=1035.08, stdev=2507.54 00:36:37.845 clat percentiles (usec): 00:36:37.845 | 1.00th=[ 506], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 701], 00:36:37.845 | 30.00th=[ 766], 40.00th=[ 816], 50.00th=[ 873], 60.00th=[ 930], 00:36:37.845 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1074], 00:36:37.845 | 99.00th=[ 1139], 99.50th=[ 1221], 99.90th=[41681], 99.95th=[41681], 00:36:37.845 | 99.99th=[41681] 00:36:37.845 bw ( KiB/s): min= 2401, max= 4512, per=24.07%, 
avg=3957.50, stdev=785.85, samples=6 00:36:37.845 iops : min= 600, max= 1128, avg=989.33, stdev=196.56, samples=6 00:36:37.845 lat (usec) : 500=0.93%, 750=26.68%, 1000=53.10% 00:36:37.845 lat (msec) : 2=18.80%, 10=0.10%, 50=0.36% 00:36:37.845 cpu : usr=0.95%, sys=2.95%, ctx=3025, majf=0, minf=2 00:36:37.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.845 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.845 issued rwts: total=3021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.845 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2394707: Thu Dec 5 21:29:39 2024 00:36:37.845 read: IOPS=907, BW=3629KiB/s (3716kB/s)(9.86MiB/2782msec) 00:36:37.845 slat (nsec): min=6561, max=61207, avg=26553.07, stdev=4334.19 00:36:37.845 clat (usec): min=386, max=41975, avg=1060.21, stdev=1803.65 00:36:37.845 lat (usec): min=408, max=42002, avg=1086.77, stdev=1803.71 00:36:37.845 clat percentiles (usec): 00:36:37.845 | 1.00th=[ 611], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 873], 00:36:37.845 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1020], 00:36:37.845 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1188], 00:36:37.845 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[41157], 99.95th=[41681], 00:36:37.845 | 99.99th=[42206] 00:36:37.845 bw ( KiB/s): min= 3600, max= 3776, per=22.34%, avg=3673.60, stdev=70.29, samples=5 00:36:37.845 iops : min= 900, max= 944, avg=918.40, stdev=17.57, samples=5 00:36:37.845 lat (usec) : 500=0.20%, 750=5.66%, 1000=48.36% 00:36:37.845 lat (msec) : 2=45.50%, 4=0.04%, 50=0.20% 00:36:37.845 cpu : usr=1.51%, sys=3.42%, ctx=2525, majf=0, minf=2 00:36:37.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.845 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.845 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.845 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.845 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2394714: Thu Dec 5 21:29:39 2024 00:36:37.845 read: IOPS=1529, BW=6117KiB/s (6263kB/s)(15.6MiB/2606msec) 00:36:37.845 slat (nsec): min=6816, max=62171, avg=23431.25, stdev=7480.55 00:36:37.845 clat (usec): min=164, max=2804, avg=616.98, stdev=91.97 00:36:37.845 lat (usec): min=172, max=2811, avg=640.41, stdev=93.17 00:36:37.845 clat percentiles (usec): 00:36:37.845 | 1.00th=[ 326], 5.00th=[ 429], 10.00th=[ 515], 20.00th=[ 562], 00:36:37.845 | 30.00th=[ 611], 40.00th=[ 627], 50.00th=[ 644], 60.00th=[ 652], 00:36:37.845 | 70.00th=[ 660], 80.00th=[ 676], 90.00th=[ 693], 95.00th=[ 709], 00:36:37.846 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 791], 99.95th=[ 873], 00:36:37.846 | 99.99th=[ 2802] 00:36:37.846 bw ( KiB/s): min= 6128, max= 6312, per=37.73%, avg=6204.80, stdev=74.32, samples=5 00:36:37.846 iops : min= 1532, max= 1578, avg=1551.20, stdev=18.58, samples=5 00:36:37.846 lat (usec) : 250=0.40%, 500=8.28%, 750=90.74%, 1000=0.53% 00:36:37.846 lat (msec) : 4=0.03% 00:36:37.846 cpu : usr=1.31%, sys=4.41%, ctx=3986, majf=0, minf=1 00:36:37.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.846 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:37.846 issued rwts: total=3986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:37.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:37.846 00:36:37.846 Run status group 0 (all jobs): 00:36:37.846 READ: bw=16.1MiB/s (16.8MB/s), 3629KiB/s-6117KiB/s 
(3716kB/s-6263kB/s), io=50.7MiB (53.1MB), run=2606-3156msec 00:36:37.846 00:36:37.846 Disk stats (read/write): 00:36:37.846 nvme0n1: ios=3386/0, merge=0/0, ticks=2647/0, in_queue=2647, util=93.59% 00:36:37.846 nvme0n2: ios=3018/0, merge=0/0, ticks=2848/0, in_queue=2848, util=94.95% 00:36:37.846 nvme0n3: ios=2376/0, merge=0/0, ticks=2282/0, in_queue=2282, util=96.03% 00:36:37.846 nvme0n4: ios=3986/0, merge=0/0, ticks=2432/0, in_queue=2432, util=96.31% 00:36:37.846 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:37.846 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:38.107 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:38.107 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:38.366 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:38.366 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:38.366 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:38.366 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:38.626 21:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:38.626 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2394464 00:36:38.626 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:38.626 21:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:38.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:38.626 nvmf hotplug test: fio failed as expected 00:36:38.626 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.886 rmmod nvme_tcp 00:36:38.886 rmmod nvme_fabrics 00:36:38.886 rmmod nvme_keyring 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:38.886 21:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2391290 ']' 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2391290 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2391290 ']' 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2391290 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.886 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391290 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391290' 00:36:39.146 killing process with pid 2391290 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2391290 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2391290 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.146 21:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:41.691 00:36:41.691 real 0m28.740s 00:36:41.691 user 2m18.006s 00:36:41.691 sys 0m13.206s 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:41.691 ************************************ 00:36:41.691 END TEST nvmf_fio_target 00:36:41.691 ************************************ 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:41.691 ************************************ 00:36:41.691 START TEST nvmf_bdevio 00:36:41.691 ************************************ 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:41.691 * Looking for test storage... 00:36:41.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.691 21:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:41.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.691 --rc genhtml_branch_coverage=1 
00:36:41.691 --rc genhtml_function_coverage=1 00:36:41.691 --rc genhtml_legend=1 00:36:41.691 --rc geninfo_all_blocks=1 00:36:41.691 --rc geninfo_unexecuted_blocks=1 00:36:41.691 00:36:41.691 ' 00:36:41.691 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:41.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.691 --rc genhtml_branch_coverage=1 00:36:41.691 --rc genhtml_function_coverage=1 00:36:41.692 --rc genhtml_legend=1 00:36:41.692 --rc geninfo_all_blocks=1 00:36:41.692 --rc geninfo_unexecuted_blocks=1 00:36:41.692 00:36:41.692 ' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:41.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.692 --rc genhtml_branch_coverage=1 00:36:41.692 --rc genhtml_function_coverage=1 00:36:41.692 --rc genhtml_legend=1 00:36:41.692 --rc geninfo_all_blocks=1 00:36:41.692 --rc geninfo_unexecuted_blocks=1 00:36:41.692 00:36:41.692 ' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:41.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.692 --rc genhtml_branch_coverage=1 00:36:41.692 --rc genhtml_function_coverage=1 00:36:41.692 --rc genhtml_legend=1 00:36:41.692 --rc geninfo_all_blocks=1 00:36:41.692 --rc geninfo_unexecuted_blocks=1 00:36:41.692 00:36:41.692 ' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.692 21:29:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.692 21:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.859 21:29:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:49.859 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:49.859 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.859 21:29:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:49.859 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:49.860 Found net devices under 0000:31:00.0: cvl_0_0 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:49.860 Found net devices under 0000:31:00.1: cvl_0_1 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.860 21:29:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.860 21:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:49.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:36:49.860 00:36:49.860 --- 10.0.0.2 ping statistics --- 00:36:49.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.860 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:49.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:36:49.860 00:36:49.860 --- 10.0.0.1 ping statistics --- 00:36:49.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.860 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2400354 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2400354 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2400354 ']' 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:49.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:49.860 21:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.121 [2024-12-05 21:29:51.303115] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:50.121 [2024-12-05 21:29:51.304121] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:36:50.121 [2024-12-05 21:29:51.304159] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.121 [2024-12-05 21:29:51.409548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:50.121 [2024-12-05 21:29:51.451916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.121 [2024-12-05 21:29:51.451958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.121 [2024-12-05 21:29:51.451966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.121 [2024-12-05 21:29:51.451974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.121 [2024-12-05 21:29:51.451980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.122 [2024-12-05 21:29:51.453831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:50.122 [2024-12-05 21:29:51.453987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:50.122 [2024-12-05 21:29:51.454272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:50.122 [2024-12-05 21:29:51.454274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.122 [2024-12-05 21:29:51.538299] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:50.122 [2024-12-05 21:29:51.538513] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:50.122 [2024-12-05 21:29:51.539526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:50.122 [2024-12-05 21:29:51.539751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:50.122 [2024-12-05 21:29:51.539828] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:50.693 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.693 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:50.693 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:50.693 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.693 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.953 [2024-12-05 21:29:52.163291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.953 Malloc0 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:50.953 [2024-12-05 21:29:52.267538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:50.953 { 00:36:50.953 "params": { 00:36:50.953 "name": "Nvme$subsystem", 00:36:50.953 "trtype": "$TEST_TRANSPORT", 00:36:50.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:50.953 "adrfam": "ipv4", 00:36:50.953 "trsvcid": "$NVMF_PORT", 00:36:50.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:50.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:50.953 "hdgst": ${hdgst:-false}, 00:36:50.953 "ddgst": ${ddgst:-false} 00:36:50.953 }, 00:36:50.953 "method": "bdev_nvme_attach_controller" 00:36:50.953 } 00:36:50.953 EOF 00:36:50.953 )") 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:50.953 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:36:50.954 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:50.954 21:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:50.954 "params": { 00:36:50.954 "name": "Nvme1", 00:36:50.954 "trtype": "tcp", 00:36:50.954 "traddr": "10.0.0.2", 00:36:50.954 "adrfam": "ipv4", 00:36:50.954 "trsvcid": "4420", 00:36:50.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:50.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:50.954 "hdgst": false, 00:36:50.954 "ddgst": false 00:36:50.954 }, 00:36:50.954 "method": "bdev_nvme_attach_controller" 00:36:50.954 }' 00:36:50.954 [2024-12-05 21:29:52.325722] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:36:50.954 [2024-12-05 21:29:52.325785] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400495 ] 00:36:51.211 [2024-12-05 21:29:52.408875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:51.211 [2024-12-05 21:29:52.452947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.211 [2024-12-05 21:29:52.453176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:51.212 [2024-12-05 21:29:52.453180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.212 I/O targets: 00:36:51.212 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:51.212 00:36:51.212 00:36:51.212 CUnit - A unit testing framework for C - Version 2.1-3 00:36:51.212 http://cunit.sourceforge.net/ 00:36:51.212 00:36:51.212 00:36:51.212 Suite: bdevio tests on: Nvme1n1 00:36:51.212 Test: blockdev write read block ...passed 00:36:51.470 Test: blockdev write zeroes read block ...passed 00:36:51.470 Test: blockdev write zeroes read no split ...passed 00:36:51.470 Test: blockdev 
write zeroes read split ...passed 00:36:51.470 Test: blockdev write zeroes read split partial ...passed 00:36:51.470 Test: blockdev reset ...[2024-12-05 21:29:52.842277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:51.470 [2024-12-05 21:29:52.842341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a80e0 (9): Bad file descriptor 00:36:51.470 [2024-12-05 21:29:52.890509] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:36:51.470 passed 00:36:51.470 Test: blockdev write read 8 blocks ...passed 00:36:51.728 Test: blockdev write read size > 128k ...passed 00:36:51.728 Test: blockdev write read invalid size ...passed 00:36:51.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:51.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:51.728 Test: blockdev write read max offset ...passed 00:36:51.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:51.728 Test: blockdev writev readv 8 blocks ...passed 00:36:51.728 Test: blockdev writev readv 30 x 1block ...passed 00:36:51.987 Test: blockdev writev readv block ...passed 00:36:51.987 Test: blockdev writev readv size > 128k ...passed 00:36:51.987 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:51.987 Test: blockdev comparev and writev ...[2024-12-05 21:29:53.193003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.193029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.193040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 
[2024-12-05 21:29:53.193046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.193477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.193485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.193494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.193500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.193936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.193944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.193954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.193959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.194385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.194393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.194403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:51.987 [2024-12-05 21:29:53.194408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:51.987 passed 00:36:51.987 Test: blockdev nvme passthru rw ...passed 00:36:51.987 Test: blockdev nvme passthru vendor specific ...[2024-12-05 21:29:53.277349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:51.987 [2024-12-05 21:29:53.277360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.277590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:51.987 [2024-12-05 21:29:53.277598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.277850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:51.987 [2024-12-05 21:29:53.277857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:51.987 [2024-12-05 21:29:53.278128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:51.987 [2024-12-05 21:29:53.278139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:51.987 passed 00:36:51.987 Test: blockdev nvme admin passthru ...passed 00:36:51.987 Test: blockdev copy ...passed 00:36:51.987 00:36:51.987 Run Summary: Type Total Ran Passed Failed Inactive 00:36:51.987 suites 1 1 n/a 0 0 00:36:51.987 tests 23 23 23 0 0 00:36:51.987 asserts 152 152 152 0 n/a 00:36:51.987 00:36:51.987 Elapsed time = 1.493 
seconds 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.248 rmmod nvme_tcp 00:36:52.248 rmmod nvme_fabrics 00:36:52.248 rmmod nvme_keyring 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2400354 ']' 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2400354 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2400354 ']' 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2400354 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2400354 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2400354' 00:36:52.248 killing process with pid 2400354 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2400354 00:36:52.248 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2400354 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.510 21:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.425 21:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.686 00:36:54.686 real 0m13.207s 00:36:54.686 user 0m10.196s 00:36:54.686 sys 0m7.071s 00:36:54.686 21:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.686 21:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:54.686 ************************************ 00:36:54.686 END TEST nvmf_bdevio 00:36:54.686 ************************************ 00:36:54.686 21:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:54.686 00:36:54.686 real 5m9.797s 00:36:54.686 user 10m18.425s 00:36:54.686 sys 2m10.618s 00:36:54.686 21:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.686 21:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:54.686 ************************************ 00:36:54.686 END TEST nvmf_target_core_interrupt_mode 00:36:54.686 ************************************ 00:36:54.686 21:29:55 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:54.686 21:29:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:54.686 21:29:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.686 21:29:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:54.686 ************************************ 00:36:54.686 START TEST nvmf_interrupt 00:36:54.686 ************************************ 00:36:54.686 21:29:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:54.686 * Looking for test storage... 
00:36:54.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:54.686 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:54.686 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:36:54.686 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:54.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.947 --rc genhtml_branch_coverage=1 00:36:54.947 --rc genhtml_function_coverage=1 00:36:54.947 --rc genhtml_legend=1 00:36:54.947 --rc geninfo_all_blocks=1 00:36:54.947 --rc geninfo_unexecuted_blocks=1 00:36:54.947 00:36:54.947 ' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:54.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.947 --rc genhtml_branch_coverage=1 00:36:54.947 --rc 
genhtml_function_coverage=1 00:36:54.947 --rc genhtml_legend=1 00:36:54.947 --rc geninfo_all_blocks=1 00:36:54.947 --rc geninfo_unexecuted_blocks=1 00:36:54.947 00:36:54.947 ' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:54.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.947 --rc genhtml_branch_coverage=1 00:36:54.947 --rc genhtml_function_coverage=1 00:36:54.947 --rc genhtml_legend=1 00:36:54.947 --rc geninfo_all_blocks=1 00:36:54.947 --rc geninfo_unexecuted_blocks=1 00:36:54.947 00:36:54.947 ' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:54.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.947 --rc genhtml_branch_coverage=1 00:36:54.947 --rc genhtml_function_coverage=1 00:36:54.947 --rc genhtml_legend=1 00:36:54.947 --rc geninfo_all_blocks=1 00:36:54.947 --rc geninfo_unexecuted_blocks=1 00:36:54.947 00:36:54.947 ' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.947 
21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.947 
21:29:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.947 21:29:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.948 21:29:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:54.948 
21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable
00:36:54.948 21:29:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=()
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:37:03.118 Found 0000:31:00.0 (0x8086 - 0x159b)
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:37:03.118 Found 0000:31:00.1 (0x8086 - 0x159b)
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:03.118 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:37:03.119 Found net devices under 0000:31:00.0: cvl_0_0
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:37:03.119 Found net devices under 0000:31:00.1: cvl_0_1
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:03.119 21:30:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:03.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:03.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms
00:37:03.119 
00:37:03.119 --- 10.0.0.2 ping statistics ---
00:37:03.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:03.119 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:03.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:03.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms
00:37:03.119 
00:37:03.119 --- 10.0.0.1 ping statistics ---
00:37:03.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:03.119 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2405519
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2405519
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2405519 ']'
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:03.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:03.119 21:30:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.119 [2024-12-05 21:30:04.283141] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:03.119 [2024-12-05 21:30:04.284315] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization...
00:37:03.119 [2024-12-05 21:30:04.284366] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:03.119 [2024-12-05 21:30:04.375185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:03.119 [2024-12-05 21:30:04.415655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:03.119 [2024-12-05 21:30:04.415693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:03.119 [2024-12-05 21:30:04.415701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:03.119 [2024-12-05 21:30:04.415708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:03.119 [2024-12-05 21:30:04.415713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:03.119 [2024-12-05 21:30:04.416911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:03.119 [2024-12-05 21:30:04.416933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:03.119 [2024-12-05 21:30:04.473815] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:03.119 [2024-12-05 21:30:04.474448] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:37:03.119 [2024-12-05 21:30:04.474749] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
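The trace above isolates the target NIC (cvl_0_0) in a network namespace, leaves the initiator port (cvl_0_1) in the root namespace, and verifies reachability with ping before launching nvmf_tgt inside the namespace. A minimal sketch of the same topology pattern follows, using a veth pair instead of two physical ports; the names (spdk_tgt, spdk_ini, spdk_tgt_ns) are hypothetical, not from the test suite, and RUN=echo prints the commands instead of running them so the sketch can be inspected without root.

```shell
#!/usr/bin/env bash
# Sketch (assumed names, veth instead of a physical NIC) of the netns
# topology built by nvmf_tcp_init in the log above. With RUN=echo the
# commands are printed, not executed; set RUN=sudo to actually apply them.
RUN="${RUN:-sudo}"

setup_ns_topology() {
    $RUN ip netns add spdk_tgt_ns                       # target side gets its own netns
    $RUN ip link add spdk_ini type veth peer name spdk_tgt
    $RUN ip link set spdk_tgt netns spdk_tgt_ns         # like 'ip link set cvl_0_0 netns cvl_0_0_ns_spdk'
    $RUN ip addr add 10.0.0.1/24 dev spdk_ini           # initiator IP in the root namespace
    $RUN ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev spdk_tgt
    $RUN ip link set spdk_ini up
    $RUN ip netns exec spdk_tgt_ns ip link set spdk_tgt up
    $RUN ip netns exec spdk_tgt_ns ip link set lo up
    $RUN ping -c 1 10.0.0.2                             # reachability check, as in the log
}

RUN=echo setup_ns_topology
```

The namespace split is what lets a single host act as both NVMe/TCP target and initiator over a real network path rather than loopback.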
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:03.691 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:37:03.951 5000+0 records in
00:37:03.951 5000+0 records out
00:37:03.951 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0182651 s, 561 MB/s
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.951 AIO0
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.951 [2024-12-05 21:30:05.201629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.951 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:03.952 [2024-12-05 21:30:05.242009] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2405519 0
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2405519 0 idle
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256
00:37:03.952 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405519 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.26 reactor_0'
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405519 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.26 reactor_0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2405519 1
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2405519 1 idle
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1'
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
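The reactor_is_busy_or_idle checks in the trace grep one thread row out of `top -bHn 1 -p "$pid" -w 256`, strip leading whitespace with sed, take field 9 (%CPU) with awk, and truncate it to an integer before comparing against the threshold. That pipeline can be re-created on a canned row copied from the log:

```shell
#!/usr/bin/env bash
# Re-creation of the cpu_rate extraction seen in interrupt/common.sh@26-28:
# strip leading spaces, print field 9 (%CPU) of the top thread row, then
# drop the fractional part for the integer busy/idle comparison. The sample
# row is taken from the log; in the real script it comes from
# 'top -bHn 1 -p "$pid" -w 256 | grep reactor_0'.
cpu_rate_of() {
    local row=$1 rate
    rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    echo "${rate%.*}"    # 73.3 -> 73, matching cpu_rate=${cpu_rate%.*}
}

row='2405519 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.37 reactor_0'
cpu_rate_of "$row"    # prints 73
```

Grepping for the reactor thread name rather than the PID is what makes the check per-reactor: each SPDK reactor is a separate thread (here TIDs 2405519 and 2405530) named reactor_N.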
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2405889
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2405519 0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2405519 0 busy
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256
00:37:04.213 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405519 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.37 reactor_0'
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405519 root 20 0 128.2g 44928 32256 R 73.3 0.0 0:00.37 reactor_0
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2405519 1
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2405519 1 busy
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:04.473 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:37:04.474 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:04.474 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:04.474 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:04.474 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256
00:37:04.474 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405530 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1'
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405530 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:04.736 21:30:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2405889
00:37:14.748 Initializing NVMe Controllers
00:37:14.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:14.748 Controller IO queue size 256, less than required.
00:37:14.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:14.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:14.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:14.748 Initialization complete. Launching workers.
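The two checks around the spdk_nvme_perf run reduce to a simple classification: during the workload a reactor must exceed BUSY_THRESHOLD (lowered to 30% here from the default 65 for interrupt mode), and once the workload stops it must fall back under the 30% idle threshold. A compact, simplified re-statement of that decision (a sketch, omitting the retry loop the real reactor_is_busy_or_idle uses):

```shell
#!/usr/bin/env bash
# Simplified classifier for the busy/idle assertion in the log: above the
# threshold the reactor is considered busy, at or below it, idle. The real
# helper retries up to 10 top samples before deciding; that loop is omitted.
reactor_state() {
    local cpu=$1 threshold=${2:-30}
    if (( cpu > threshold )); then echo busy; else echo idle; fi
}

reactor_state 73   # reactor_0 at 73% while spdk_nvme_perf runs
reactor_state 0    # reactor_1 at 0% once the workload has finished
```

This is the core claim the interrupt-mode test makes: reactors burn CPU only while I/O is in flight, instead of polling at 100% permanently.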
00:37:14.748 ========================================================
00:37:14.748 Latency(us)
00:37:14.748 Device Information : IOPS MiB/s Average min max
00:37:14.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16863.80 65.87 15188.85 2402.60 18449.98
00:37:14.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20436.80 79.83 12528.13 7305.53 30708.17
00:37:14.748 ========================================================
00:37:14.748 Total : 37300.60 145.71 13731.06 2402.60 30708.17
00:37:14.748 
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2405519 0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2405519 0 idle
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405519 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0'
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405519 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2405519 1
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2405519 1 idle
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256
00:37:14.748 21:30:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:14.748 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1'
00:37:14.748 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405530 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1
00:37:14.748 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:14.748 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:14.748 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:14.748 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:14.749 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:14.749 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:14.749 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:14.749 21:30:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:14.749 21:30:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:37:15.321 21:30:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:37:15.321 21:30:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:15.321 21:30:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:15.321 21:30:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:15.321 21:30:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2405519 0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2405519 0 idle 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405519 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.55 reactor_0' 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405519 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.55 reactor_0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2405519 1 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2405519 1 idle 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2405519 00:37:17.867 
21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:17.867 21:30:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2405519 -w 256 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2405530 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2405530 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:17.867 21:30:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:18.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.129 rmmod nvme_tcp 00:37:18.129 rmmod nvme_fabrics 00:37:18.129 rmmod nvme_keyring 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.129 21:30:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2405519 ']' 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2405519 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2405519 ']' 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2405519 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.129 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2405519 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2405519' 00:37:18.391 killing process with pid 2405519 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2405519 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2405519 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:18.391 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:18.392 21:30:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.940 21:30:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.940 00:37:20.940 real 0m25.829s 00:37:20.940 user 0m40.381s 00:37:20.940 sys 0m9.914s 00:37:20.940 21:30:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.940 21:30:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:20.940 ************************************ 00:37:20.940 END TEST nvmf_interrupt 00:37:20.940 ************************************ 00:37:20.940 00:37:20.940 real 31m0.673s 00:37:20.940 user 61m35.778s 00:37:20.940 sys 10m52.334s 00:37:20.940 21:30:21 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.940 21:30:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.940 ************************************ 00:37:20.940 END TEST nvmf_tcp 00:37:20.940 ************************************ 00:37:20.940 21:30:21 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:20.940 21:30:21 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:20.940 21:30:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:20.940 21:30:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.940 21:30:21 -- common/autotest_common.sh@10 -- # set +x 00:37:20.940 ************************************ 
00:37:20.940 START TEST spdkcli_nvmf_tcp 00:37:20.940 ************************************ 00:37:20.940 21:30:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:20.940 * Looking for test storage... 00:37:20.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:20.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.940 --rc genhtml_branch_coverage=1 00:37:20.940 --rc genhtml_function_coverage=1 00:37:20.940 --rc genhtml_legend=1 00:37:20.940 --rc geninfo_all_blocks=1 00:37:20.940 --rc geninfo_unexecuted_blocks=1 00:37:20.940 00:37:20.940 ' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:20.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.940 --rc genhtml_branch_coverage=1 00:37:20.940 --rc genhtml_function_coverage=1 00:37:20.940 --rc genhtml_legend=1 00:37:20.940 --rc geninfo_all_blocks=1 
00:37:20.940 --rc geninfo_unexecuted_blocks=1 00:37:20.940 00:37:20.940 ' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:20.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.940 --rc genhtml_branch_coverage=1 00:37:20.940 --rc genhtml_function_coverage=1 00:37:20.940 --rc genhtml_legend=1 00:37:20.940 --rc geninfo_all_blocks=1 00:37:20.940 --rc geninfo_unexecuted_blocks=1 00:37:20.940 00:37:20.940 ' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:20.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.940 --rc genhtml_branch_coverage=1 00:37:20.940 --rc genhtml_function_coverage=1 00:37:20.940 --rc genhtml_legend=1 00:37:20.940 --rc geninfo_all_blocks=1 00:37:20.940 --rc geninfo_unexecuted_blocks=1 00:37:20.940 00:37:20.940 ' 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:20.940 21:30:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
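The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh) decides which lcov option names to use. A minimal sketch of that component-wise comparison, under the assumption that versions split on `.`, `-` and `:` as the traced `IFS=.-:` suggests:

```shell
#!/usr/bin/env bash
# Sketch of cmp_versions for the '<' case: split both versions into
# numeric components and compare left to right, padding the shorter
# version with zeros. Returns 0 (true) only if $1 is strictly older.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v a b
  for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
    a=${ver1[v]:-0}; b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov < 2: use legacy --rc option names" \
                  || echo "lcov >= 2"
```

Here `1 < 2` already decides the comparison at the first component, so `lcov 1.15` takes the legacy-options branch, as in the trace.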
00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:20.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2409513 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2409513 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2409513 ']' 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:20.941 
21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:20.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.941 21:30:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.941 [2024-12-05 21:30:22.237308] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:37:20.941 [2024-12-05 21:30:22.237384] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409513 ] 00:37:20.941 [2024-12-05 21:30:22.319911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:20.941 [2024-12-05 21:30:22.362676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.941 [2024-12-05 21:30:22.362680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
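The trace above includes a real shell error from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` raises `[: : integer expression expected` because an unset variable is tested with a numeric operator. A hedged defensive pattern (the variable name here is a hypothetical stand-in, not the one common.sh uses):

```shell
#!/usr/bin/env bash
# Testing an empty string with -eq makes `[` complain; supplying a
# numeric default via ${var:-0} keeps the comparison well-formed.
unset SOME_NUMERIC_FLAG   # hypothetical stand-in for the empty variable

if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset, default path"
fi
```

The non-fatal error in the log is harmless only because `[` failing is treated like a false condition; the defaulted form avoids the stderr noise entirely.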
00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.878 21:30:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:21.878 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:21.878 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:21.878 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:21.878 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:21.878 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:21.878 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:21.878 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:21.878 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:21.878 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:21.878 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:21.878 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:21.878 ' 00:37:24.421 [2024-12-05 21:30:25.710147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:25.804 [2024-12-05 21:30:26.918164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:27.720 [2024-12-05 21:30:29.136547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:37:29.659 [2024-12-05 21:30:31.042472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:31.574 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:31.575 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:31.575 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:31.575 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:31.575 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:31.575 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:31.575 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:31.575 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:31.575 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:31.575 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:31.575 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:31.575 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:31.575 21:30:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:31.836 21:30:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:31.836 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:31.836 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:31.836 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:31.836 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:31.836 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:31.836 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:31.836 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:31.836 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:31.836 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:31.836 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:31.836 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:31.836 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:31.836 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:31.836 ' 00:37:37.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:37.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:37.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:37.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:37.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:37.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:37.290 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:37.290 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:37.290 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:37.290 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:37.290 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:37.290 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:37.290 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:37.290 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2409513 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2409513 ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2409513 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409513 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409513' 00:37:37.290 killing process with pid 2409513 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2409513 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2409513 00:37:37.290 21:30:38 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2409513 ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2409513 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2409513 ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2409513 00:37:37.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2409513) - No such process 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2409513 is not found' 00:37:37.290 Process with pid 2409513 is not found 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:37.290 00:37:37.290 real 0m16.454s 00:37:37.290 user 0m34.307s 00:37:37.290 sys 0m0.754s 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.290 21:30:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:37.290 ************************************ 00:37:37.290 END TEST spdkcli_nvmf_tcp 00:37:37.290 ************************************ 00:37:37.290 21:30:38 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:37.290 21:30:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:37.290 21:30:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:37:37.290 21:30:38 -- common/autotest_common.sh@10 -- # set +x 00:37:37.290 ************************************ 00:37:37.290 START TEST nvmf_identify_passthru 00:37:37.290 ************************************ 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:37.290 * Looking for test storage... 00:37:37.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:37.290 21:30:38 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:37.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.290 --rc genhtml_branch_coverage=1 00:37:37.290 --rc genhtml_function_coverage=1 00:37:37.290 --rc genhtml_legend=1 00:37:37.290 --rc geninfo_all_blocks=1 00:37:37.290 --rc geninfo_unexecuted_blocks=1 00:37:37.290 
00:37:37.290 ' 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:37.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.290 --rc genhtml_branch_coverage=1 00:37:37.290 --rc genhtml_function_coverage=1 00:37:37.290 --rc genhtml_legend=1 00:37:37.290 --rc geninfo_all_blocks=1 00:37:37.290 --rc geninfo_unexecuted_blocks=1 00:37:37.290 00:37:37.290 ' 00:37:37.290 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:37.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.290 --rc genhtml_branch_coverage=1 00:37:37.290 --rc genhtml_function_coverage=1 00:37:37.290 --rc genhtml_legend=1 00:37:37.291 --rc geninfo_all_blocks=1 00:37:37.291 --rc geninfo_unexecuted_blocks=1 00:37:37.291 00:37:37.291 ' 00:37:37.291 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:37.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.291 --rc genhtml_branch_coverage=1 00:37:37.291 --rc genhtml_function_coverage=1 00:37:37.291 --rc genhtml_legend=1 00:37:37.291 --rc geninfo_all_blocks=1 00:37:37.291 --rc geninfo_unexecuted_blocks=1 00:37:37.291 00:37:37.291 ' 00:37:37.291 21:30:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:37.291 21:30:38 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:37.291 21:30:38 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:37.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:37.291 21:30:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:37.291 21:30:38 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:37.291 21:30:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:37.291 21:30:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.291 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:37.291 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:37.291 21:30:38 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:37.291 21:30:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@315 
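The exported PATH echoed above contains the same Go/protoc/golangci directories prepended several times, because each sourcing of paths/export.sh prepends unconditionally. A minimal sketch of an idempotent prepend that would avoid the duplication (the helper name `path_prepend` is illustrative, not part of SPDK or paths/export.sh):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present,
# avoiding the repeated entries visible in the exported PATH above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH: leave it untouched
        *) PATH="$1:$PATH" ;;     # not present: prepend once
    esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"
```

Wrapping `$PATH` in colons makes the `case` match whole path components only, so `/bin` does not falsely match `/usr/bin`.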
-- # local -a pci_devs 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:45.436 
21:30:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:45.436 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:45.436 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:45.436 Found net devices under 0000:31:00.0: cvl_0_0 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.436 21:30:46 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:45.436 Found net devices under 0000:31:00.1: cvl_0_1 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:45.436 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:45.437 
21:30:46 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:45.437 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:45.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:45.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:37:45.698 00:37:45.698 --- 10.0.0.2 ping statistics --- 00:37:45.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.698 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:45.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:45.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:37:45.698 00:37:45.698 --- 10.0.0.1 ping statistics --- 00:37:45.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.698 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:45.698 21:30:46 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:45.698 21:30:46 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:45.698 21:30:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:45.698 
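The interface plumbing traced above (`nvmf_tcp_init` in nvmf/common.sh) can be condensed into a standalone sketch. This is a hypothetical dry-run that only collects and prints the planned commands; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.x addresses are copied from this log, and applying the commands for real requires root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# Names and addresses are taken from the log; nothing is executed for real --
# the planned commands are collected in CMDS and printed.
TARGET_IF=cvl_0_0          # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1       # stays in the root namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

CMDS=()
plan() { CMDS+=("$*"); }

plan ip -4 addr flush "$TARGET_IF"
plan ip -4 addr flush "$INITIATOR_IF"
plan ip netns add "$NS"
plan ip link set "$TARGET_IF" netns "$NS"
plan ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
plan ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
plan ip link set "$INITIATOR_IF" up
plan ip netns exec "$NS" ip link set "$TARGET_IF" up
plan ip netns exec "$NS" ip link set lo up
plan iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

printf '%s\n' "${CMDS[@]}"
```

Swapping `plan` for direct execution (as root) reproduces the state that the two ping checks in the log then verify from each side of the namespace boundary.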
21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:45.698 21:30:46 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:45.698 21:30:47 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:45.698 21:30:47 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:45.698 21:30:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:45.698 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:45.698 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:45.698 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:45.698 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:45.698 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:46.299 21:30:47 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:37:46.299 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:46.299 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:46.299 21:30:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:46.869 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:46.870 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.870 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.870 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2416975 00:37:46.870 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:46.870 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:46.870 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2416975 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2416975 ']' 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:46.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:46.870 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:46.870 [2024-12-05 21:30:48.157203] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:37:46.870 [2024-12-05 21:30:48.157261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.870 [2024-12-05 21:30:48.245698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:46.870 [2024-12-05 21:30:48.283111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.870 [2024-12-05 21:30:48.283156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.870 [2024-12-05 21:30:48.283164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.870 [2024-12-05 21:30:48.283170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.870 [2024-12-05 21:30:48.283176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:46.870 [2024-12-05 21:30:48.284870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.870 [2024-12-05 21:30:48.285007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.870 [2024-12-05 21:30:48.285062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.870 [2024-12-05 21:30:48.285062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:47.810 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.810 INFO: Log level set to 20 00:37:47.810 INFO: Requests: 00:37:47.810 { 00:37:47.810 "jsonrpc": "2.0", 00:37:47.810 "method": "nvmf_set_config", 00:37:47.810 "id": 1, 00:37:47.810 "params": { 00:37:47.810 "admin_cmd_passthru": { 00:37:47.810 "identify_ctrlr": true 00:37:47.810 } 00:37:47.810 } 00:37:47.810 } 00:37:47.810 00:37:47.810 INFO: response: 00:37:47.810 { 00:37:47.810 "jsonrpc": "2.0", 00:37:47.810 "id": 1, 00:37:47.810 "result": true 00:37:47.810 } 00:37:47.810 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.810 21:30:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.810 21:30:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.810 INFO: Setting log level to 20 00:37:47.810 INFO: Setting log level to 20 00:37:47.810 INFO: Log level set to 20 00:37:47.810 INFO: Log level set to 20 00:37:47.810 
INFO: Requests: 00:37:47.810 { 00:37:47.810 "jsonrpc": "2.0", 00:37:47.810 "method": "framework_start_init", 00:37:47.810 "id": 1 00:37:47.810 } 00:37:47.810 00:37:47.810 INFO: Requests: 00:37:47.810 { 00:37:47.810 "jsonrpc": "2.0", 00:37:47.810 "method": "framework_start_init", 00:37:47.810 "id": 1 00:37:47.810 } 00:37:47.810 00:37:47.810 [2024-12-05 21:30:49.026327] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:47.810 INFO: response: 00:37:47.810 { 00:37:47.810 "jsonrpc": "2.0", 00:37:47.810 "id": 1, 00:37:47.810 "result": true 00:37:47.810 } 00:37:47.810 00:37:47.810 INFO: response: 00:37:47.810 { 00:37:47.810 "jsonrpc": "2.0", 00:37:47.810 "id": 1, 00:37:47.810 "result": true 00:37:47.810 } 00:37:47.810 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.810 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.810 INFO: Setting log level to 40 00:37:47.810 INFO: Setting log level to 40 00:37:47.810 INFO: Setting log level to 40 00:37:47.810 [2024-12-05 21:30:49.039638] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.810 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.810 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:47.810 21:30:49 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.810 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 Nvme0n1 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 [2024-12-05 21:30:49.440126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.070 21:30:49 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 [ 00:37:48.070 { 00:37:48.070 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:48.070 "subtype": "Discovery", 00:37:48.070 "listen_addresses": [], 00:37:48.070 "allow_any_host": true, 00:37:48.070 "hosts": [] 00:37:48.070 }, 00:37:48.070 { 00:37:48.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.070 "subtype": "NVMe", 00:37:48.070 "listen_addresses": [ 00:37:48.070 { 00:37:48.070 "trtype": "TCP", 00:37:48.070 "adrfam": "IPv4", 00:37:48.070 "traddr": "10.0.0.2", 00:37:48.070 "trsvcid": "4420" 00:37:48.070 } 00:37:48.070 ], 00:37:48.070 "allow_any_host": true, 00:37:48.070 "hosts": [], 00:37:48.070 "serial_number": "SPDK00000000000001", 00:37:48.070 "model_number": "SPDK bdev Controller", 00:37:48.070 "max_namespaces": 1, 00:37:48.070 "min_cntlid": 1, 00:37:48.070 "max_cntlid": 65519, 00:37:48.070 "namespaces": [ 00:37:48.070 { 00:37:48.070 "nsid": 1, 00:37:48.070 "bdev_name": "Nvme0n1", 00:37:48.070 "name": "Nvme0n1", 00:37:48.070 "nguid": "3634473052605494002538450000002D", 00:37:48.070 "uuid": "36344730-5260-5494-0025-38450000002d" 00:37:48.070 } 00:37:48.070 ] 00:37:48.070 } 00:37:48.070 ] 00:37:48.070 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:48.070 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:48.330 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:37:48.330 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
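The `rpc_cmd` calls traced in this run (which wrap SPDK's scripts/rpc.py) amount to the sequence below. This is a hypothetical dry-run sketch: the calls are collected and printed rather than sent, and the BDF `0000:65:00.0`, the NQN, and the listener address are the values from this log.

```shell
#!/usr/bin/env bash
# Dry-run of the rpc.py sequence used above to build the passthru target.
BDF=0000:65:00.0                 # first NVMe BDF found by get_first_nvme_bdf
NQN=nqn.2016-06.io.spdk:cnode1

RPC_CALLS=(
  "nvmf_set_config --passthru-identify-ctrlr"    # must precede framework init
  "framework_start_init"
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_nvme_attach_controller -b Nvme0 -t PCIe -a $BDF"
  "nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 1"
  "nvmf_subsystem_add_ns $NQN Nvme0n1"
  "nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
)
for call in "${RPC_CALLS[@]}"; do
  echo "rpc.py $call"            # drop the echo to apply for real
done
```

The ordering matters: `nvmf_set_config --passthru-identify-ctrlr` has to land before `framework_start_init`, which is why the log shows nvmf_tgt being launched with `--wait-for-rpc`.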
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:48.330 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:48.330 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:48.591 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:48.591 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.591 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:48.591 21:30:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.591 rmmod nvme_tcp 00:37:48.591 rmmod nvme_fabrics 00:37:48.591 rmmod nvme_keyring 00:37:48.591 21:30:49 
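The pass/fail criterion of this test is the pair of `'!='` checks just traced: identify data read through the TCP passthru subsystem must match what the local PCIe controller reports. A minimal sketch of that comparison, reusing the grep/awk parse-and-compare pattern from identify_passthru.sh but with sample identify lines from this log standing in for live `spdk_nvme_identify` output:

```shell
#!/usr/bin/env bash
# Compare the "Serial Number:" field the way identify_passthru.sh does.
# The two sample lines below stand in for live spdk_nvme_identify output.
field3() { awk -v key="$1" '$0 ~ key {print $3}'; }

pcie_id="Serial Number: S64GNE0R605494"   # identify against the PCIe device
tcp_id="Serial Number: S64GNE0R605494"    # identify over NVMe/TCP passthru

nvme_serial=$(field3 'Serial Number:' <<<"$pcie_id")
nvmf_serial=$(field3 'Serial Number:' <<<"$tcp_id")

if [ "$nvme_serial" != "$nvmf_serial" ]; then
  echo "passthru serial mismatch: $nvme_serial vs $nvmf_serial" >&2
  exit 1
fi
echo "serial numbers match: $nvme_serial"
```

The real script repeats the same pattern for "Model Number:"; if either field differs, the passthru identify handler is not forwarding the controller data and the test aborts.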
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2416975 ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2416975 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2416975 ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2416975 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416975 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416975' 00:37:48.591 killing process with pid 2416975 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2416975 00:37:48.591 21:30:49 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2416975 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:48.853 21:30:50 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.853 21:30:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.853 21:30:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:48.853 21:30:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.413 21:30:52 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.413 00:37:51.413 real 0m13.847s 00:37:51.413 user 0m10.167s 00:37:51.413 sys 0m7.290s 00:37:51.413 21:30:52 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.413 21:30:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:51.413 ************************************ 00:37:51.413 END TEST nvmf_identify_passthru 00:37:51.413 ************************************ 00:37:51.413 21:30:52 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:51.413 21:30:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:51.413 21:30:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.413 21:30:52 -- common/autotest_common.sh@10 -- # set +x 00:37:51.413 ************************************ 00:37:51.413 START TEST nvmf_dif 00:37:51.413 ************************************ 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:51.413 * Looking for test storage... 
00:37:51.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.413 --rc genhtml_branch_coverage=1 00:37:51.413 --rc genhtml_function_coverage=1 00:37:51.413 --rc genhtml_legend=1 00:37:51.413 --rc geninfo_all_blocks=1 00:37:51.413 --rc geninfo_unexecuted_blocks=1 00:37:51.413 00:37:51.413 ' 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.413 --rc genhtml_branch_coverage=1 00:37:51.413 --rc genhtml_function_coverage=1 00:37:51.413 --rc genhtml_legend=1 00:37:51.413 --rc geninfo_all_blocks=1 00:37:51.413 --rc geninfo_unexecuted_blocks=1 00:37:51.413 00:37:51.413 ' 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
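The xtrace above walks `cmp_versions` from scripts/common.sh: it splits `1.15` and `2` on the characters `.-:` and compares field by field, with missing fields defaulting to 0. A compact re-implementation of that idea — a sketch of the algorithm, not the exact library function:

```shell
#!/usr/bin/env bash
# Field-wise dotted-version "less than", mirroring cmp_versions' approach:
# split both versions on ".-:" and compare numerically, left to right.
version_lt() {
  local IFS='.-:' i x y
  local -a a b
  read -ra a <<<"$1"
  read -ra b <<<"$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}    # absent fields count as 0
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1                        # equal, hence not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Here the check decides that the installed lcov (1.15) predates version 2, which selects the `--rc lcov_*` option spellings seen in the LCOV_OPTS export that follows.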
'LCOV=lcov 00:37:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.413 --rc genhtml_branch_coverage=1 00:37:51.413 --rc genhtml_function_coverage=1 00:37:51.413 --rc genhtml_legend=1 00:37:51.413 --rc geninfo_all_blocks=1 00:37:51.413 --rc geninfo_unexecuted_blocks=1 00:37:51.413 00:37:51.413 ' 00:37:51.413 21:30:52 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.413 --rc genhtml_branch_coverage=1 00:37:51.413 --rc genhtml_function_coverage=1 00:37:51.413 --rc genhtml_legend=1 00:37:51.413 --rc geninfo_all_blocks=1 00:37:51.413 --rc geninfo_unexecuted_blocks=1 00:37:51.413 00:37:51.413 ' 00:37:51.413 21:30:52 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:51.413 21:30:52 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.413 21:30:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.413 21:30:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.413 21:30:52 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.413 21:30:52 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.413 21:30:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:51.413 21:30:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:51.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.413 21:30:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:51.413 21:30:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:37:51.413 21:30:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:51.413 21:30:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:51.413 21:30:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.413 21:30:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.414 21:30:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.414 21:30:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:51.414 21:30:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.414 21:30:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.414 21:30:52 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.414 21:30:52 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.414 21:30:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:59.557 21:31:00 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:59.557 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:59.557 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.557 21:31:00 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:59.557 Found net devices under 0000:31:00.0: cvl_0_0 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:59.557 Found net devices under 0000:31:00.1: cvl_0_1 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:59.557 
21:31:00 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:59.557 21:31:00 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:59.818 21:31:00 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:59.818 21:31:00 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:59.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:59.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:37:59.818 00:37:59.818 --- 10.0.0.2 ping statistics --- 00:37:59.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.818 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:59.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:59.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:37:59.818 00:37:59.818 --- 10.0.0.1 ping statistics --- 00:37:59.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.818 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:59.818 21:31:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:04.025 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:04.025 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:38:04.025 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:04.025 21:31:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:04.025 21:31:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2423938 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2423938 00:38:04.025 21:31:05 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2423938 ']' 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:04.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:04.025 21:31:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:04.285 [2024-12-05 21:31:05.503606] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:38:04.285 [2024-12-05 21:31:05.503691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:04.285 [2024-12-05 21:31:05.593968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.285 [2024-12-05 21:31:05.632408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:04.285 [2024-12-05 21:31:05.632447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:04.285 [2024-12-05 21:31:05.632455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:04.285 [2024-12-05 21:31:05.632462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:04.285 [2024-12-05 21:31:05.632467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:04.285 [2024-12-05 21:31:05.633154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.855 21:31:06 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:04.855 21:31:06 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:05.118 21:31:06 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 21:31:06 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.118 21:31:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:05.118 21:31:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 [2024-12-05 21:31:06.338858] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.118 21:31:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 ************************************ 00:38:05.118 START TEST fio_dif_1_default 00:38:05.118 ************************************ 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 bdev_null0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:05.118 [2024-12-05 21:31:06.427227] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:05.118 { 00:38:05.118 "params": { 00:38:05.118 "name": "Nvme$subsystem", 00:38:05.118 "trtype": "$TEST_TRANSPORT", 00:38:05.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:05.118 "adrfam": "ipv4", 00:38:05.118 "trsvcid": "$NVMF_PORT", 00:38:05.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:05.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:05.118 "hdgst": ${hdgst:-false}, 00:38:05.118 "ddgst": ${ddgst:-false} 00:38:05.118 }, 00:38:05.118 "method": "bdev_nvme_attach_controller" 00:38:05.118 } 00:38:05.118 EOF 00:38:05.118 )") 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:05.118 21:31:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:05.119 "params": { 00:38:05.119 "name": "Nvme0", 00:38:05.119 "trtype": "tcp", 00:38:05.119 "traddr": "10.0.0.2", 00:38:05.119 "adrfam": "ipv4", 00:38:05.119 "trsvcid": "4420", 00:38:05.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:05.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:05.119 "hdgst": false, 00:38:05.119 "ddgst": false 00:38:05.119 }, 00:38:05.119 "method": "bdev_nvme_attach_controller" 00:38:05.119 }' 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:05.119 21:31:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:05.711 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:05.711 fio-3.35 
00:38:05.711 Starting 1 thread 00:38:17.936 00:38:17.936 filename0: (groupid=0, jobs=1): err= 0: pid=2424467: Thu Dec 5 21:31:17 2024 00:38:17.936 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10032msec) 00:38:17.936 slat (nsec): min=5501, max=32194, avg=6364.46, stdev=1610.22 00:38:17.936 clat (usec): min=40844, max=44520, avg=41095.33, stdev=397.89 00:38:17.936 lat (usec): min=40853, max=44552, avg=41101.69, stdev=398.28 00:38:17.936 clat percentiles (usec): 00:38:17.936 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:17.936 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:17.936 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:38:17.936 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:38:17.936 | 99.99th=[44303] 00:38:17.936 bw ( KiB/s): min= 384, max= 416, per=99.70%, avg=388.80, stdev=11.72, samples=20 00:38:17.936 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:17.936 lat (msec) : 50=100.00% 00:38:17.936 cpu : usr=92.95%, sys=6.84%, ctx=11, majf=0, minf=221 00:38:17.936 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:17.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:17.936 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:17.936 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:17.936 00:38:17.936 Run status group 0 (all jobs): 00:38:17.936 READ: bw=389KiB/s (398kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=3904KiB (3998kB), run=10032-10032msec 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 00:38:17.936 real 0m11.256s 00:38:17.936 user 0m25.178s 00:38:17.936 sys 0m1.004s 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 ************************************ 00:38:17.936 END TEST fio_dif_1_default 00:38:17.936 ************************************ 00:38:17.936 21:31:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:17.936 21:31:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:17.936 21:31:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 ************************************ 00:38:17.936 START TEST fio_dif_1_multi_subsystems 00:38:17.936 ************************************ 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 bdev_null0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 [2024-12-05 21:31:17.766803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 bdev_null1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:17.936 21:31:17 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:17.936 { 00:38:17.936 "params": { 00:38:17.936 "name": "Nvme$subsystem", 00:38:17.936 "trtype": "$TEST_TRANSPORT", 00:38:17.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:17.936 "adrfam": "ipv4", 00:38:17.936 "trsvcid": "$NVMF_PORT", 00:38:17.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:17.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:17.936 "hdgst": ${hdgst:-false}, 00:38:17.936 "ddgst": ${ddgst:-false} 00:38:17.936 }, 00:38:17.936 "method": "bdev_nvme_attach_controller" 00:38:17.936 } 00:38:17.936 EOF 00:38:17.936 )") 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:17.936 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:17.937 { 00:38:17.937 "params": { 00:38:17.937 "name": "Nvme$subsystem", 00:38:17.937 "trtype": "$TEST_TRANSPORT", 00:38:17.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:17.937 "adrfam": "ipv4", 00:38:17.937 "trsvcid": "$NVMF_PORT", 00:38:17.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:17.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:17.937 "hdgst": ${hdgst:-false}, 00:38:17.937 "ddgst": ${ddgst:-false} 00:38:17.937 }, 00:38:17.937 "method": "bdev_nvme_attach_controller" 00:38:17.937 } 00:38:17.937 EOF 00:38:17.937 )") 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:17.937 "params": { 00:38:17.937 "name": "Nvme0", 00:38:17.937 "trtype": "tcp", 00:38:17.937 "traddr": "10.0.0.2", 00:38:17.937 "adrfam": "ipv4", 00:38:17.937 "trsvcid": "4420", 00:38:17.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:17.937 "hdgst": false, 00:38:17.937 "ddgst": false 00:38:17.937 }, 00:38:17.937 "method": "bdev_nvme_attach_controller" 00:38:17.937 },{ 00:38:17.937 "params": { 00:38:17.937 "name": "Nvme1", 00:38:17.937 "trtype": "tcp", 00:38:17.937 "traddr": "10.0.0.2", 00:38:17.937 "adrfam": "ipv4", 00:38:17.937 "trsvcid": "4420", 00:38:17.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:17.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:17.937 "hdgst": false, 00:38:17.937 "ddgst": false 00:38:17.937 }, 00:38:17.937 "method": "bdev_nvme_attach_controller" 00:38:17.937 }' 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:17.937 21:31:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:17.937 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:17.937 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:17.937 fio-3.35 00:38:17.937 Starting 2 threads 00:38:27.927 00:38:27.927 filename0: (groupid=0, jobs=1): err= 0: pid=2426690: Thu Dec 5 21:31:29 2024 00:38:27.927 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:38:27.927 slat (nsec): min=5518, max=29432, avg=6356.28, stdev=1491.69 00:38:27.927 clat (usec): min=40822, max=42550, avg=41029.90, stdev=217.69 00:38:27.927 lat (usec): min=40827, max=42579, avg=41036.26, stdev=217.85 00:38:27.927 clat percentiles (usec): 00:38:27.927 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:27.927 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:27.927 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:27.927 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:27.927 | 99.99th=[42730] 00:38:27.927 bw ( KiB/s): min= 384, max= 416, per=33.90%, avg=388.80, stdev=11.72, samples=20 00:38:27.927 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:38:27.927 lat (msec) : 50=100.00% 00:38:27.927 cpu : usr=94.94%, sys=4.86%, ctx=8, majf=0, minf=61 00:38:27.927 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.927 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.927 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:27.927 filename1: (groupid=0, jobs=1): err= 0: pid=2426691: Thu Dec 5 21:31:29 2024 00:38:27.927 read: IOPS=188, BW=755KiB/s (773kB/s)(7568KiB/10024msec) 00:38:27.927 slat (nsec): min=5521, max=26162, avg=6527.13, stdev=1454.06 00:38:27.927 clat (usec): min=620, max=43010, avg=21174.12, stdev=20205.55 00:38:27.927 lat (usec): min=625, max=43016, avg=21180.65, stdev=20205.55 00:38:27.927 clat percentiles (usec): 00:38:27.927 | 1.00th=[ 734], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 930], 00:38:27.927 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[41157], 60.00th=[41157], 00:38:27.927 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:27.927 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:38:27.927 | 99.99th=[43254] 00:38:27.927 bw ( KiB/s): min= 704, max= 768, per=65.97%, avg=755.20, stdev=26.27, samples=20 00:38:27.927 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:38:27.927 lat (usec) : 750=1.32%, 1000=47.41% 00:38:27.927 lat (msec) : 2=1.16%, 50=50.11% 00:38:27.927 cpu : usr=95.55%, sys=4.24%, ctx=12, majf=0, minf=190 00:38:27.927 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:27.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:27.927 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:27.927 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:27.927 00:38:27.927 Run status group 0 (all jobs): 00:38:27.927 READ: bw=1144KiB/s (1172kB/s), 390KiB/s-755KiB/s (399kB/s-773kB/s), io=11.2MiB (11.7MB), run=10016-10024msec 00:38:27.927 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:27.927 21:31:29 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:27.927 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:27.927 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:27.927 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:27.927 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:27.927 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 00:38:27.928 real 0m11.480s 00:38:27.928 user 0m31.492s 00:38:27.928 sys 0m1.251s 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 ************************************ 00:38:27.928 END TEST fio_dif_1_multi_subsystems 00:38:27.928 ************************************ 00:38:27.928 21:31:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:27.928 21:31:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:27.928 21:31:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 ************************************ 00:38:27.928 START TEST fio_dif_rand_params 00:38:27.928 ************************************ 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 bdev_null0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 21:31:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:27.928 [2024-12-05 21:31:29.329197] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:27.928 { 00:38:27.928 "params": { 00:38:27.928 "name": "Nvme$subsystem", 00:38:27.928 "trtype": "$TEST_TRANSPORT", 00:38:27.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:27.928 "adrfam": "ipv4", 00:38:27.928 "trsvcid": "$NVMF_PORT", 00:38:27.928 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:27.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:27.928 "hdgst": ${hdgst:-false}, 00:38:27.928 "ddgst": ${ddgst:-false} 00:38:27.928 }, 00:38:27.928 "method": "bdev_nvme_attach_controller" 00:38:27.928 } 00:38:27.928 EOF 00:38:27.928 )") 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:27.928 21:31:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:27.928 21:31:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:27.928 "params": { 00:38:27.928 "name": "Nvme0", 00:38:27.928 "trtype": "tcp", 00:38:27.928 "traddr": "10.0.0.2", 00:38:27.928 "adrfam": "ipv4", 00:38:27.928 "trsvcid": "4420", 00:38:27.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.928 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.928 "hdgst": false, 00:38:27.928 "ddgst": false 00:38:27.928 }, 00:38:27.928 "method": "bdev_nvme_attach_controller" 00:38:27.928 }' 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:28.188 21:31:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:28.447 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:28.447 ... 00:38:28.447 fio-3.35 00:38:28.447 Starting 3 threads 00:38:35.027 00:38:35.027 filename0: (groupid=0, jobs=1): err= 0: pid=2429136: Thu Dec 5 21:31:35 2024 00:38:35.027 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(151MiB/5004msec) 00:38:35.027 slat (nsec): min=5567, max=49338, avg=8909.88, stdev=2396.29 00:38:35.027 clat (usec): min=5253, max=92231, avg=12393.11, stdev=10988.71 00:38:35.027 lat (usec): min=5262, max=92242, avg=12402.02, stdev=10988.69 00:38:35.027 clat percentiles (usec): 00:38:35.027 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7701], 00:38:35.027 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10290], 00:38:35.027 | 70.00th=[10945], 80.00th=[11731], 90.00th=[13304], 95.00th=[48497], 00:38:35.027 | 99.00th=[52167], 99.50th=[53216], 99.90th=[91751], 99.95th=[91751], 00:38:35.027 | 99.99th=[91751] 00:38:35.027 bw ( KiB/s): min=22272, max=38144, per=37.48%, avg=31146.67, stdev=5204.11, samples=9 00:38:35.027 iops : min= 174, max= 298, avg=243.33, stdev=40.66, samples=9 00:38:35.027 lat (msec) : 10=55.12%, 20=37.93%, 50=3.55%, 100=3.39% 00:38:35.027 cpu : usr=94.90%, sys=4.82%, ctx=18, majf=0, minf=102 00:38:35.027 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.027 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:35.027 filename0: (groupid=0, jobs=1): err= 0: pid=2429137: Thu Dec 5 21:31:35 2024 00:38:35.027 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(93.1MiB/5010msec) 00:38:35.027 slat (nsec): min=5785, max=61595, avg=9157.43, stdev=2751.08 00:38:35.027 
clat (usec): min=5969, max=93575, avg=20157.27, stdev=18646.61 00:38:35.027 lat (usec): min=5978, max=93585, avg=20166.42, stdev=18646.73 00:38:35.027 clat percentiles (usec): 00:38:35.027 | 1.00th=[ 6521], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[10421], 00:38:35.027 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12518], 60.00th=[13173], 00:38:35.027 | 70.00th=[14353], 80.00th=[16581], 90.00th=[52691], 95.00th=[54264], 00:38:35.027 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:38:35.027 | 99.99th=[93848] 00:38:35.027 bw ( KiB/s): min=13312, max=25600, per=22.86%, avg=18995.20, stdev=4464.82, samples=10 00:38:35.027 iops : min= 104, max= 200, avg=148.40, stdev=34.88, samples=10 00:38:35.027 lat (msec) : 10=17.18%, 20=64.70%, 50=1.74%, 100=16.38% 00:38:35.027 cpu : usr=96.61%, sys=3.09%, ctx=11, majf=0, minf=160 00:38:35.027 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.027 issued rwts: total=745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:35.027 filename0: (groupid=0, jobs=1): err= 0: pid=2429138: Thu Dec 5 21:31:35 2024 00:38:35.027 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(165MiB/5045msec) 00:38:35.027 slat (nsec): min=5771, max=53476, avg=9169.25, stdev=2630.81 00:38:35.027 clat (usec): min=4624, max=54183, avg=11423.02, stdev=7293.30 00:38:35.027 lat (usec): min=4633, max=54192, avg=11432.19, stdev=7293.22 00:38:35.027 clat percentiles (usec): 00:38:35.027 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7635], 00:38:35.027 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:38:35.027 | 70.00th=[11994], 80.00th=[13173], 90.00th=[14746], 95.00th=[15926], 00:38:35.027 | 99.00th=[49546], 99.50th=[51119], 99.90th=[54264], 
99.95th=[54264], 00:38:35.027 | 99.99th=[54264] 00:38:35.027 bw ( KiB/s): min=30208, max=40704, per=40.61%, avg=33740.80, stdev=3257.90, samples=10 00:38:35.027 iops : min= 236, max= 318, avg=263.60, stdev=25.45, samples=10 00:38:35.027 lat (msec) : 10=47.42%, 20=49.47%, 50=2.27%, 100=0.83% 00:38:35.027 cpu : usr=96.02%, sys=3.73%, ctx=9, majf=0, minf=74 00:38:35.027 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.027 issued rwts: total=1320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.027 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:35.027 00:38:35.027 Run status group 0 (all jobs): 00:38:35.027 READ: bw=81.1MiB/s (85.1MB/s), 18.6MiB/s-32.7MiB/s (19.5MB/s-34.3MB/s), io=409MiB (429MB), run=5004-5045msec 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.027 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 bdev_null0 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 
21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 [2024-12-05 21:31:35.582534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 bdev_null1 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 
21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:38:35.028 bdev_null2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:35.028 { 00:38:35.028 "params": { 00:38:35.028 "name": "Nvme$subsystem", 00:38:35.028 "trtype": "$TEST_TRANSPORT", 00:38:35.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.028 "adrfam": "ipv4", 00:38:35.028 "trsvcid": "$NVMF_PORT", 00:38:35.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.028 "hdgst": ${hdgst:-false}, 00:38:35.028 "ddgst": ${ddgst:-false} 00:38:35.028 }, 00:38:35.028 "method": "bdev_nvme_attach_controller" 00:38:35.028 } 00:38:35.028 EOF 00:38:35.028 )") 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:35.028 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:35.029 { 00:38:35.029 "params": { 00:38:35.029 "name": "Nvme$subsystem", 00:38:35.029 "trtype": "$TEST_TRANSPORT", 00:38:35.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.029 "adrfam": "ipv4", 00:38:35.029 "trsvcid": "$NVMF_PORT", 00:38:35.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.029 "hdgst": ${hdgst:-false}, 00:38:35.029 "ddgst": ${ddgst:-false} 00:38:35.029 }, 00:38:35.029 "method": "bdev_nvme_attach_controller" 00:38:35.029 } 00:38:35.029 EOF 00:38:35.029 )") 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:35.029 { 00:38:35.029 "params": { 00:38:35.029 "name": "Nvme$subsystem", 00:38:35.029 "trtype": "$TEST_TRANSPORT", 00:38:35.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.029 "adrfam": "ipv4", 00:38:35.029 "trsvcid": "$NVMF_PORT", 00:38:35.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.029 "hdgst": ${hdgst:-false}, 00:38:35.029 "ddgst": ${ddgst:-false} 00:38:35.029 }, 00:38:35.029 "method": "bdev_nvme_attach_controller" 00:38:35.029 } 00:38:35.029 EOF 00:38:35.029 )") 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:35.029 "params": { 00:38:35.029 "name": "Nvme0", 00:38:35.029 "trtype": "tcp", 00:38:35.029 "traddr": "10.0.0.2", 00:38:35.029 "adrfam": "ipv4", 00:38:35.029 "trsvcid": "4420", 00:38:35.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.029 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.029 "hdgst": false, 00:38:35.029 "ddgst": false 00:38:35.029 }, 00:38:35.029 "method": "bdev_nvme_attach_controller" 00:38:35.029 },{ 00:38:35.029 "params": { 00:38:35.029 "name": "Nvme1", 00:38:35.029 "trtype": "tcp", 00:38:35.029 "traddr": "10.0.0.2", 00:38:35.029 "adrfam": "ipv4", 00:38:35.029 "trsvcid": "4420", 00:38:35.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:35.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:35.029 "hdgst": false, 00:38:35.029 "ddgst": false 00:38:35.029 }, 00:38:35.029 "method": "bdev_nvme_attach_controller" 00:38:35.029 },{ 00:38:35.029 "params": { 00:38:35.029 "name": "Nvme2", 00:38:35.029 "trtype": "tcp", 00:38:35.029 "traddr": "10.0.0.2", 00:38:35.029 "adrfam": "ipv4", 00:38:35.029 "trsvcid": "4420", 00:38:35.029 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:35.029 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:35.029 "hdgst": false, 00:38:35.029 "ddgst": false 00:38:35.029 }, 00:38:35.029 "method": "bdev_nvme_attach_controller" 00:38:35.029 }' 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.029 21:31:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:35.029 21:31:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.029 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.029 ... 00:38:35.029 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.029 ... 00:38:35.029 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:35.029 ... 
00:38:35.029 fio-3.35 00:38:35.029 Starting 24 threads 00:38:47.264 00:38:47.264 filename0: (groupid=0, jobs=1): err= 0: pid=2430397: Thu Dec 5 21:31:46 2024 00:38:47.264 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10016msec) 00:38:47.264 slat (nsec): min=5686, max=83913, avg=14866.49, stdev=14801.34 00:38:47.264 clat (usec): min=6106, max=35759, avg=32419.06, stdev=2657.47 00:38:47.264 lat (usec): min=6120, max=35766, avg=32433.93, stdev=2656.46 00:38:47.264 clat percentiles (usec): 00:38:47.264 | 1.00th=[15270], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:38:47.264 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:38:47.264 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33424], 00:38:47.264 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:38:47.264 | 99.99th=[35914] 00:38:47.264 bw ( KiB/s): min= 1920, max= 2280, per=4.19%, avg=1963.60, stdev=90.94, samples=20 00:38:47.264 iops : min= 480, max= 570, avg=490.90, stdev=22.73, samples=20 00:38:47.264 lat (msec) : 10=0.39%, 20=1.32%, 50=98.29% 00:38:47.264 cpu : usr=98.92%, sys=0.81%, ctx=13, majf=0, minf=30 00:38:47.264 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:47.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.264 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.264 issued rwts: total=4925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.264 filename0: (groupid=0, jobs=1): err= 0: pid=2430398: Thu Dec 5 21:31:46 2024 00:38:47.264 read: IOPS=510, BW=2041KiB/s (2090kB/s)(19.9MiB/10005msec) 00:38:47.264 slat (nsec): min=5686, max=76091, avg=16892.24, stdev=12369.62 00:38:47.264 clat (usec): min=1556, max=35359, avg=31220.90, stdev=6058.53 00:38:47.264 lat (usec): min=1575, max=35398, avg=31237.80, stdev=6059.30 00:38:47.264 clat percentiles (usec): 00:38:47.264 | 
1.00th=[ 1729], 5.00th=[19268], 10.00th=[32113], 20.00th=[32375], 00:38:47.264 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.264 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.264 | 99.00th=[33817], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:38:47.264 | 99.99th=[35390] 00:38:47.264 bw ( KiB/s): min= 1920, max= 3584, per=4.35%, avg=2041.26, stdev=377.96, samples=19 00:38:47.264 iops : min= 480, max= 896, avg=510.32, stdev=94.49, samples=19 00:38:47.264 lat (msec) : 2=2.51%, 4=0.63%, 10=0.27%, 20=1.92%, 50=94.67% 00:38:47.264 cpu : usr=98.70%, sys=0.96%, ctx=70, majf=0, minf=40 00:38:47.264 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:47.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.264 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.264 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.264 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.264 filename0: (groupid=0, jobs=1): err= 0: pid=2430399: Thu Dec 5 21:31:46 2024 00:38:47.264 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10027msec) 00:38:47.264 slat (nsec): min=5768, max=85001, avg=21674.12, stdev=14085.10 00:38:47.264 clat (usec): min=5189, max=42042, avg=32338.67, stdev=2726.67 00:38:47.264 lat (usec): min=5202, max=42049, avg=32360.34, stdev=2725.68 00:38:47.264 clat percentiles (usec): 00:38:47.264 | 1.00th=[13304], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.265 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33424], 00:38:47.265 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:38:47.265 | 99.99th=[42206] 00:38:47.265 bw ( KiB/s): min= 1920, max= 2304, per=4.19%, avg=1964.80, stdev=95.38, samples=20 00:38:47.265 iops : min= 480, max= 576, avg=491.20, 
stdev=23.85, samples=20 00:38:47.265 lat (msec) : 10=0.32%, 20=1.56%, 50=98.11% 00:38:47.265 cpu : usr=98.89%, sys=0.84%, ctx=12, majf=0, minf=21 00:38:47.265 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.265 filename0: (groupid=0, jobs=1): err= 0: pid=2430400: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=485, BW=1941KiB/s (1987kB/s)(19.0MiB/10005msec) 00:38:47.265 slat (nsec): min=5781, max=83392, avg=20267.37, stdev=13016.19 00:38:47.265 clat (usec): min=15986, max=78944, avg=32792.14, stdev=3910.93 00:38:47.265 lat (usec): min=16001, max=78964, avg=32812.41, stdev=3911.65 00:38:47.265 clat percentiles (usec): 00:38:47.265 | 1.00th=[16909], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.265 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:38:47.265 | 99.00th=[48497], 99.50th=[49021], 99.90th=[79168], 99.95th=[79168], 00:38:47.265 | 99.99th=[79168] 00:38:47.265 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1936.16, stdev=86.63, samples=19 00:38:47.265 iops : min= 416, max= 512, avg=484.00, stdev=21.66, samples=19 00:38:47.265 lat (msec) : 20=1.55%, 50=98.06%, 100=0.39% 00:38:47.265 cpu : usr=98.31%, sys=1.17%, ctx=159, majf=0, minf=29 00:38:47.265 IO depths : 1=5.2%, 2=10.6%, 4=23.6%, 8=53.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:38:47.265 filename0: (groupid=0, jobs=1): err= 0: pid=2430401: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10012msec) 00:38:47.265 slat (usec): min=5, max=100, avg=14.75, stdev=10.79 00:38:47.265 clat (usec): min=19151, max=41846, avg=32697.94, stdev=1133.11 00:38:47.265 lat (usec): min=19157, max=41861, avg=32712.69, stdev=1132.84 00:38:47.265 clat percentiles (usec): 00:38:47.265 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:38:47.265 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.265 | 99.00th=[33817], 99.50th=[35390], 99.90th=[41681], 99.95th=[41681], 00:38:47.265 | 99.99th=[41681] 00:38:47.265 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1946.74, stdev=68.61, samples=19 00:38:47.265 iops : min= 448, max= 512, avg=486.68, stdev=17.15, samples=19 00:38:47.265 lat (msec) : 20=0.33%, 50=99.67% 00:38:47.265 cpu : usr=98.23%, sys=1.32%, ctx=36, majf=0, minf=35 00:38:47.265 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.265 filename0: (groupid=0, jobs=1): err= 0: pid=2430402: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10023msec) 00:38:47.265 slat (nsec): min=5804, max=94465, avg=14602.99, stdev=9099.62 00:38:47.265 clat (usec): min=19067, max=35620, avg=32623.20, stdev=1236.12 00:38:47.265 lat (usec): min=19073, max=35632, avg=32637.80, stdev=1236.35 00:38:47.265 clat percentiles (usec): 00:38:47.265 | 1.00th=[29492], 5.00th=[32375], 10.00th=[32637], 
20.00th=[32637], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.265 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.265 | 99.00th=[33817], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:38:47.265 | 99.99th=[35390] 00:38:47.265 bw ( KiB/s): min= 1900, max= 2048, per=4.16%, avg=1951.00, stdev=57.63, samples=20 00:38:47.265 iops : min= 475, max= 512, avg=487.75, stdev=14.41, samples=20 00:38:47.265 lat (msec) : 20=0.33%, 50=99.67% 00:38:47.265 cpu : usr=98.16%, sys=1.18%, ctx=281, majf=0, minf=25 00:38:47.265 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.265 filename0: (groupid=0, jobs=1): err= 0: pid=2430403: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec) 00:38:47.265 slat (nsec): min=5678, max=95049, avg=19438.64, stdev=17234.53 00:38:47.265 clat (usec): min=21473, max=59701, avg=32737.24, stdev=1754.23 00:38:47.265 lat (usec): min=21479, max=59721, avg=32756.68, stdev=1753.19 00:38:47.265 clat percentiles (usec): 00:38:47.265 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.265 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.265 | 99.00th=[33817], 99.50th=[34341], 99.90th=[59507], 99.95th=[59507], 00:38:47.265 | 99.99th=[59507] 00:38:47.265 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1940.16, stdev=63.88, samples=19 00:38:47.265 iops : min= 448, max= 512, avg=485.00, stdev=16.07, samples=19 00:38:47.265 lat (msec) : 50=99.67%, 100=0.33% 
00:38:47.265 cpu : usr=98.98%, sys=0.69%, ctx=118, majf=0, minf=34 00:38:47.265 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.265 filename0: (groupid=0, jobs=1): err= 0: pid=2430404: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec) 00:38:47.265 slat (nsec): min=5699, max=63156, avg=17546.79, stdev=10360.85 00:38:47.265 clat (usec): min=21736, max=59266, avg=32762.80, stdev=1699.96 00:38:47.265 lat (usec): min=21743, max=59284, avg=32780.35, stdev=1699.36 00:38:47.265 clat percentiles (usec): 00:38:47.265 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.265 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.265 | 99.00th=[33817], 99.50th=[34341], 99.90th=[58983], 99.95th=[59507], 00:38:47.265 | 99.99th=[59507] 00:38:47.265 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1940.16, stdev=63.88, samples=19 00:38:47.265 iops : min= 448, max= 512, avg=485.00, stdev=16.07, samples=19 00:38:47.265 lat (msec) : 50=99.67%, 100=0.33% 00:38:47.265 cpu : usr=98.43%, sys=1.06%, ctx=147, majf=0, minf=48 00:38:47.265 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.265 filename1: (groupid=0, jobs=1): err= 
0: pid=2430405: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=483, BW=1936KiB/s (1982kB/s)(18.9MiB/10005msec) 00:38:47.265 slat (nsec): min=5674, max=62412, avg=16314.82, stdev=10182.59 00:38:47.265 clat (usec): min=15835, max=79035, avg=32907.48, stdev=3838.77 00:38:47.265 lat (usec): min=15841, max=79054, avg=32923.79, stdev=3838.84 00:38:47.265 clat percentiles (usec): 00:38:47.265 | 1.00th=[23200], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:38:47.265 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.265 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:38:47.265 | 99.00th=[46924], 99.50th=[55837], 99.90th=[79168], 99.95th=[79168], 00:38:47.265 | 99.99th=[79168] 00:38:47.265 bw ( KiB/s): min= 1712, max= 2048, per=4.12%, avg=1931.11, stdev=77.83, samples=19 00:38:47.265 iops : min= 428, max= 512, avg=482.74, stdev=19.46, samples=19 00:38:47.265 lat (msec) : 20=0.74%, 50=98.43%, 100=0.83% 00:38:47.265 cpu : usr=98.79%, sys=0.94%, ctx=14, majf=0, minf=33 00:38:47.265 IO depths : 1=5.0%, 2=10.4%, 4=21.7%, 8=54.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:38:47.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.265 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.265 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.265 filename1: (groupid=0, jobs=1): err= 0: pid=2430406: Thu Dec 5 21:31:46 2024 00:38:47.265 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:38:47.265 slat (nsec): min=5703, max=97155, avg=23878.19, stdev=14464.67 00:38:47.265 clat (usec): min=21098, max=58375, avg=32670.28, stdev=1669.68 00:38:47.265 lat (usec): min=21104, max=58394, avg=32694.16, stdev=1669.29 00:38:47.265 clat percentiles (usec): 00:38:47.266 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 
50.00th=[32637], 60.00th=[32637], 00:38:47.266 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[58459], 99.95th=[58459], 00:38:47.266 | 99.99th=[58459] 00:38:47.266 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1940.37, stdev=64.14, samples=19 00:38:47.266 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:38:47.266 lat (msec) : 50=99.67%, 100=0.33% 00:38:47.266 cpu : usr=98.50%, sys=1.03%, ctx=125, majf=0, minf=27 00:38:47.266 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.266 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.266 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.266 filename1: (groupid=0, jobs=1): err= 0: pid=2430407: Thu Dec 5 21:31:46 2024 00:38:47.266 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10017msec) 00:38:47.266 slat (nsec): min=5772, max=74667, avg=21517.95, stdev=12180.34 00:38:47.266 clat (usec): min=9085, max=35741, avg=32324.77, stdev=2685.45 00:38:47.266 lat (usec): min=9115, max=35748, avg=32346.29, stdev=2685.18 00:38:47.266 clat percentiles (usec): 00:38:47.266 | 1.00th=[14484], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.266 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33424], 00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:38:47.266 | 99.99th=[35914] 00:38:47.266 bw ( KiB/s): min= 1920, max= 2304, per=4.19%, avg=1964.80, stdev=95.38, samples=20 00:38:47.266 iops : min= 480, max= 576, avg=491.20, stdev=23.85, samples=20 00:38:47.266 lat (msec) : 10=0.32%, 20=1.58%, 50=98.09% 00:38:47.266 cpu : usr=98.78%, sys=0.94%, ctx=14, majf=0, 
minf=35 00:38:47.266 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.266 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.266 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.266 filename1: (groupid=0, jobs=1): err= 0: pid=2430408: Thu Dec 5 21:31:46 2024 00:38:47.266 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec) 00:38:47.266 slat (nsec): min=5769, max=83783, avg=21764.35, stdev=12665.07 00:38:47.266 clat (usec): min=21057, max=59082, avg=32705.42, stdev=1694.57 00:38:47.266 lat (usec): min=21063, max=59099, avg=32727.18, stdev=1694.02 00:38:47.266 clat percentiles (usec): 00:38:47.266 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:38:47.266 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[58983], 99.95th=[58983], 00:38:47.266 | 99.99th=[58983] 00:38:47.266 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1940.37, stdev=64.14, samples=19 00:38:47.266 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:38:47.266 lat (msec) : 50=99.67%, 100=0.33% 00:38:47.266 cpu : usr=98.82%, sys=0.92%, ctx=14, majf=0, minf=34 00:38:47.266 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.266 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:47.266 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:47.266 filename1: (groupid=0, jobs=1): err= 0: pid=2430409: Thu Dec 5 21:31:46 2024 00:38:47.266 read: 
IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10023msec)
00:38:47.266 slat (nsec): min=5768, max=52144, avg=11722.58, stdev=6934.10
00:38:47.266 clat (usec): min=17816, max=47469, avg=32653.02, stdev=1321.90
00:38:47.266 lat (usec): min=17822, max=47477, avg=32664.74, stdev=1321.82
00:38:47.266 clat percentiles (usec):
00:38:47.266 | 1.00th=[23200], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637],
00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900],
00:38:47.266 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390],
00:38:47.266 | 99.99th=[47449]
00:38:47.266 bw ( KiB/s): min= 1900, max= 2048, per=4.16%, avg=1951.00, stdev=57.63, samples=20
00:38:47.266 iops : min= 475, max= 512, avg=487.75, stdev=14.41, samples=20
00:38:47.266 lat (msec) : 20=0.39%, 50=99.61%
00:38:47.266 cpu : usr=98.85%, sys=0.87%, ctx=20, majf=0, minf=47
00:38:47.266 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.266 filename1: (groupid=0, jobs=1): err= 0: pid=2430410: Thu Dec 5 21:31:46 2024
00:38:47.266 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10027msec)
00:38:47.266 slat (nsec): min=5681, max=75837, avg=9563.64, stdev=7650.81
00:38:47.266 clat (usec): min=10291, max=34259, avg=32477.40, stdev=2255.83
00:38:47.266 lat (usec): min=10306, max=34265, avg=32486.96, stdev=2254.97
00:38:47.266 clat percentiles (usec):
00:38:47.266 | 1.00th=[19792], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637],
00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900],
00:38:47.266 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:38:47.266 | 99.99th=[34341]
00:38:47.266 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=1964.80, stdev=75.15, samples=20
00:38:47.266 iops : min= 480, max= 544, avg=491.20, stdev=18.79, samples=20
00:38:47.266 lat (msec) : 20=1.22%, 50=98.78%
00:38:47.266 cpu : usr=98.90%, sys=0.77%, ctx=35, majf=0, minf=51
00:38:47.266 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.266 filename1: (groupid=0, jobs=1): err= 0: pid=2430411: Thu Dec 5 21:31:46 2024
00:38:47.266 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10016msec)
00:38:47.266 slat (usec): min=5, max=100, avg=15.84, stdev=16.00
00:38:47.266 clat (usec): min=15343, max=50318, avg=32205.99, stdev=3163.81
00:38:47.266 lat (usec): min=15349, max=50324, avg=32221.83, stdev=3164.31
00:38:47.266 clat percentiles (usec):
00:38:47.266 | 1.00th=[21890], 5.00th=[23200], 10.00th=[32113], 20.00th=[32637],
00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900],
00:38:47.266 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.266 | 99.00th=[40633], 99.50th=[43779], 99.90th=[50070], 99.95th=[50070],
00:38:47.266 | 99.99th=[50070]
00:38:47.266 bw ( KiB/s): min= 1916, max= 2160, per=4.18%, avg=1961.89, stdev=71.53, samples=19
00:38:47.266 iops : min= 479, max= 540, avg=490.47, stdev=17.88, samples=19
00:38:47.266 lat (msec) : 20=0.56%, 50=99.31%, 100=0.12%
00:38:47.266 cpu : usr=99.14%, sys=0.59%, ctx=17, majf=0, minf=31
00:38:47.266 IO depths : 1=5.1%, 2=10.5%, 4=22.3%, 8=54.5%, 16=7.6%, 32=0.0%, >=64=0.0%
00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.266 filename1: (groupid=0, jobs=1): err= 0: pid=2430412: Thu Dec 5 21:31:46 2024
00:38:47.266 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec)
00:38:47.266 slat (nsec): min=5675, max=92254, avg=19947.89, stdev=16523.06
00:38:47.266 clat (usec): min=21349, max=62587, avg=32731.82, stdev=1698.00
00:38:47.266 lat (usec): min=21359, max=62609, avg=32751.77, stdev=1696.99
00:38:47.266 clat percentiles (usec):
00:38:47.266 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375],
00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:38:47.266 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[58459], 99.95th=[58459],
00:38:47.266 | 99.99th=[62653]
00:38:47.266 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1940.37, stdev=64.14, samples=19
00:38:47.266 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19
00:38:47.266 lat (msec) : 50=99.67%, 100=0.33%
00:38:47.266 cpu : usr=99.00%, sys=0.72%, ctx=60, majf=0, minf=26
00:38:47.266 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:47.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.266 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.266 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.266 filename2: (groupid=0, jobs=1): err= 0: pid=2430413: Thu Dec 5 21:31:46 2024
00:38:47.266 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10017msec)
00:38:47.266 slat (nsec): min=5681, max=80490, avg=22244.46, stdev=15210.45
00:38:47.266 clat (usec): min=9101, max=35774, avg=32327.05, stdev=2710.48
00:38:47.266 lat (usec): min=9117, max=35780, avg=32349.29, stdev=2709.75
00:38:47.266 clat percentiles (usec):
00:38:47.266 | 1.00th=[14353], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:38:47.266 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:38:47.266 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33424],
00:38:47.266 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914],
00:38:47.266 | 99.99th=[35914]
00:38:47.266 bw ( KiB/s): min= 1920, max= 2304, per=4.19%, avg=1964.80, stdev=95.38, samples=20
00:38:47.266 iops : min= 480, max= 576, avg=491.20, stdev=23.85, samples=20
00:38:47.266 lat (msec) : 10=0.32%, 20=1.62%, 50=98.05%
00:38:47.266 cpu : usr=98.91%, sys=0.75%, ctx=68, majf=0, minf=46
00:38:47.267 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430414: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec)
00:38:47.267 slat (nsec): min=5932, max=97063, avg=23475.68, stdev=14461.50
00:38:47.267 clat (usec): min=21436, max=59610, avg=32693.08, stdev=1725.32
00:38:47.267 lat (usec): min=21443, max=59632, avg=32716.55, stdev=1724.58
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375],
00:38:47.267 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:38:47.267 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.267 | 99.00th=[33817], 99.50th=[34341], 99.90th=[59507], 99.95th=[59507],
00:38:47.267 | 99.99th=[59507]
00:38:47.267 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1940.16, stdev=63.88, samples=19
00:38:47.267 iops : min= 448, max= 512, avg=485.00, stdev=16.07, samples=19
00:38:47.267 lat (msec) : 50=99.67%, 100=0.33%
00:38:47.267 cpu : usr=98.53%, sys=0.86%, ctx=145, majf=0, minf=35
00:38:47.267 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430415: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10012msec)
00:38:47.267 slat (nsec): min=5689, max=52560, avg=10033.26, stdev=6392.61
00:38:47.267 clat (usec): min=13012, max=46772, avg=32633.67, stdev=1765.97
00:38:47.267 lat (usec): min=13021, max=46778, avg=32643.70, stdev=1765.93
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[21103], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637],
00:38:47.267 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900],
00:38:47.267 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.267 | 99.00th=[33817], 99.50th=[35390], 99.90th=[45351], 99.95th=[46924],
00:38:47.267 | 99.99th=[46924]
00:38:47.267 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1952.00, stdev=56.87, samples=20
00:38:47.267 iops : min= 480, max= 512, avg=488.00, stdev=14.22, samples=20
00:38:47.267 lat (msec) : 20=0.74%, 50=99.26%
00:38:47.267 cpu : usr=98.77%, sys=0.90%, ctx=73, majf=0, minf=39
00:38:47.267 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%,
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430416: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=485, BW=1942KiB/s (1989kB/s)(19.0MiB/10006msec)
00:38:47.267 slat (nsec): min=5675, max=96590, avg=17796.96, stdev=13836.04
00:38:47.267 clat (usec): min=11958, max=74016, avg=32886.33, stdev=2259.33
00:38:47.267 lat (usec): min=11965, max=74034, avg=32904.13, stdev=2259.02
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[31589], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637],
00:38:47.267 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900],
00:38:47.267 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33424],
00:38:47.267 | 99.00th=[34341], 99.50th=[40633], 99.90th=[73925], 99.95th=[73925],
00:38:47.267 | 99.99th=[73925]
00:38:47.267 bw ( KiB/s): min= 1795, max= 2016, per=4.13%, avg=1939.55, stdev=48.70, samples=20
00:38:47.267 iops : min= 448, max= 504, avg=484.85, stdev=12.29, samples=20
00:38:47.267 lat (msec) : 20=0.21%, 50=99.36%, 100=0.43%
00:38:47.267 cpu : usr=98.62%, sys=0.95%, ctx=46, majf=0, minf=40
00:38:47.267 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=81.0%, 16=18.6%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4859,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430417: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=493, BW=1972KiB/s (2020kB/s)(19.3MiB/10027msec)
00:38:47.267 slat (nsec): min=5678, max=85974, avg=17791.36, stdev=14863.40
00:38:47.267 clat (usec): min=8927, max=34172, avg=32301.84, stdev=2801.37
00:38:47.267 lat (usec): min=8945, max=34179, avg=32319.63, stdev=2799.90
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[14222], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637],
00:38:47.267 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900],
00:38:47.267 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.267 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:38:47.267 | 99.99th=[34341]
00:38:47.267 bw ( KiB/s): min= 1920, max= 2304, per=4.20%, avg=1971.20, stdev=96.50, samples=20
00:38:47.267 iops : min= 480, max= 576, avg=492.80, stdev=24.13, samples=20
00:38:47.267 lat (msec) : 10=0.32%, 20=1.62%, 50=98.06%
00:38:47.267 cpu : usr=98.90%, sys=0.84%, ctx=14, majf=0, minf=34
00:38:47.267 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430418: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10016msec)
00:38:47.267 slat (nsec): min=5772, max=99347, avg=23349.72, stdev=14167.71
00:38:47.267 clat (usec): min=16689, max=39611, avg=32628.60, stdev=1210.69
00:38:47.267 lat (usec): min=16695, max=39627, avg=32651.95, stdev=1210.84
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:38:47.267 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:38:47.267 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33424],
00:38:47.267 | 99.00th=[34866], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584],
00:38:47.267 | 99.99th=[39584]
00:38:47.267 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1940.00, stdev=48.06, samples=19
00:38:47.267 iops : min= 479, max= 512, avg=485.00, stdev=12.01, samples=19
00:38:47.267 lat (msec) : 20=0.33%, 50=99.67%
00:38:47.267 cpu : usr=98.99%, sys=0.75%, ctx=14, majf=0, minf=35
00:38:47.267 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430419: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=487, BW=1951KiB/s (1997kB/s)(19.1MiB/10007msec)
00:38:47.267 slat (nsec): min=5913, max=81200, avg=23403.39, stdev=13283.16
00:38:47.267 clat (usec): min=19398, max=35759, avg=32590.84, stdev=1002.81
00:38:47.267 lat (usec): min=19410, max=35766, avg=32614.25, stdev=1002.99
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375],
00:38:47.267 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:38:47.267 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.267 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914],
00:38:47.267 | 99.99th=[35914]
00:38:47.267 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1946.95, stdev=53.61, samples=19
00:38:47.267 iops : min= 480, max= 512, avg=486.74, stdev=13.40, samples=19
00:38:47.267 lat (msec) : 20=0.33%, 50=99.67%
00:38:47.267 cpu : usr=98.92%, sys=0.80%, ctx=15, majf=0, minf=24
00:38:47.267 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267 filename2: (groupid=0, jobs=1): err= 0: pid=2430420: Thu Dec 5 21:31:46 2024
00:38:47.267 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10006msec)
00:38:47.267 slat (nsec): min=5782, max=93549, avg=22905.50, stdev=15420.01
00:38:47.267 clat (usec): min=21184, max=60774, avg=32715.56, stdev=1791.67
00:38:47.267 lat (usec): min=21199, max=60795, avg=32738.46, stdev=1790.65
00:38:47.267 clat percentiles (usec):
00:38:47.267 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:38:47.267 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637],
00:38:47.267 | 70.00th=[32900], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162],
00:38:47.267 | 99.00th=[33817], 99.50th=[34341], 99.90th=[60556], 99.95th=[60556],
00:38:47.267 | 99.99th=[60556]
00:38:47.267 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1940.00, stdev=64.26, samples=19
00:38:47.267 iops : min= 448, max= 512, avg=485.00, stdev=16.07, samples=19
00:38:47.267 lat (msec) : 50=99.67%, 100=0.33%
00:38:47.267 cpu : usr=98.67%, sys=0.97%, ctx=99, majf=0, minf=35
00:38:47.267 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:47.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:47.267 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:47.267 latency : target=0, window=0, percentile=100.00%, depth=16
00:38:47.267
00:38:47.267 Run status group 0 (all jobs):
00:38:47.267 READ: bw=45.8MiB/s (48.0MB/s), 1936KiB/s-2041KiB/s (1982kB/s-2090kB/s), io=459MiB (482MB), run=10004-10027msec
00:38:47.267 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:38:47.267 21:31:47
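The fio figures above are internally consistent and can be cross-checked: fio reports bandwidth in KiB/s (1024-byte units) with kB/s (1000-byte units) in parentheses, and average bandwidth equals average IOPS times the block size. A minimal sanity-check sketch, assuming these randread jobs use 4 KiB blocks (the block size is not shown in this excerpt):

```python
# Cross-check fio's reported numbers from the log above.
# Assumption: 4 KiB reads, inferred from BW/IOPS ratios (not printed here).

def kib_to_kb(kib_per_s: float) -> float:
    """Convert fio's KiB/s (1024-byte) figure to the kB/s (1000-byte) one."""
    return kib_per_s * 1024 / 1000

# pid=2430410: avg iops=491.20, avg bw=1964.80 KiB/s -> 491.20 * 4 KiB
assert abs(491.20 * 4 - 1964.80) < 0.01

# Header of the first record: BW=1954KiB/s shown as (2001kB/s)
assert round(kib_to_kb(1954)) == 2001
```

The same arithmetic applies to every per-file record and to the group summary line (45.8 MiB/s shown as 48.0 MB/s).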
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:47.267 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:47.267 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:47.267 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 bdev_null0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 [2024-12-05 21:31:47.223695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 bdev_null1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:47.268 {
00:38:47.268 "params": {
00:38:47.268 "name": "Nvme$subsystem",
00:38:47.268 "trtype": "$TEST_TRANSPORT",
00:38:47.268 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:47.268 "adrfam": "ipv4",
00:38:47.268 "trsvcid": "$NVMF_PORT",
00:38:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:47.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:47.268 "hdgst": ${hdgst:-false},
00:38:47.268 "ddgst": ${ddgst:-false}
00:38:47.268 },
00:38:47.268 "method": "bdev_nvme_attach_controller"
00:38:47.268 }
00:38:47.268 EOF
00:38:47.268 )")
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:47.268 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:47.268 {
00:38:47.268 "params": {
00:38:47.268 "name": "Nvme$subsystem",
00:38:47.268 "trtype": "$TEST_TRANSPORT",
00:38:47.268 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:47.268 "adrfam": "ipv4",
00:38:47.268 "trsvcid": "$NVMF_PORT",
00:38:47.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:47.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:47.269 "hdgst": ${hdgst:-false},
00:38:47.269 "ddgst": ${ddgst:-false}
00:38:47.269 },
00:38:47.269 "method": "bdev_nvme_attach_controller"
00:38:47.269 }
00:38:47.269 EOF
00:38:47.269 )")
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:47.269 "params": {
00:38:47.269 "name": "Nvme0",
00:38:47.269 "trtype": "tcp",
00:38:47.269 "traddr": "10.0.0.2",
00:38:47.269 "adrfam": "ipv4",
00:38:47.269 "trsvcid": "4420",
00:38:47.269 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:47.269 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:47.269 "hdgst": false,
00:38:47.269 "ddgst": false
00:38:47.269 },
00:38:47.269 "method": "bdev_nvme_attach_controller"
00:38:47.269 },{
00:38:47.269 "params": {
00:38:47.269 "name": "Nvme1",
00:38:47.269 "trtype": "tcp",
00:38:47.269 "traddr": "10.0.0.2",
00:38:47.269 "adrfam": "ipv4",
00:38:47.269 "trsvcid": "4420",
00:38:47.269 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:47.269 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:47.269 "hdgst": false,
00:38:47.269 "ddgst": false
00:38:47.269 },
00:38:47.269 "method": "bdev_nvme_attach_controller"
00:38:47.269 }'
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:47.269 21:31:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:47.269 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:38:47.269 ...
00:38:47.269 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:38:47.269 ...
00:38:47.269 fio-3.35
00:38:47.269 Starting 4 threads
00:38:52.605
00:38:52.605 filename0: (groupid=0, jobs=1): err= 0: pid=2432905: Thu Dec 5 21:31:53 2024
00:38:52.605 read: IOPS=2087, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5003msec)
00:38:52.605 slat (nsec): min=5514, max=43419, avg=6220.17, stdev=2195.85
00:38:52.605 clat (usec): min=1306, max=6429, avg=3814.68, stdev=708.28
00:38:52.605 lat (usec): min=1322, max=6435, avg=3820.90, stdev=708.04
00:38:52.605 clat percentiles (usec):
00:38:52.605 | 1.00th=[ 2671], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3392],
00:38:52.605 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3687],
00:38:52.605 | 70.00th=[ 3785], 80.00th=[ 4015], 90.00th=[ 5276], 95.00th=[ 5276],
00:38:52.605 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6063], 99.95th=[ 6325],
00:38:52.605 | 99.99th=[ 6456]
00:38:52.605 bw ( KiB/s): min=16464, max=17296, per=25.19%, avg=16730.67, stdev=240.80, samples=9
00:38:52.605 iops : min= 2058, max= 2162, avg=2091.33, stdev=30.10, samples=9
00:38:52.605 lat (msec) : 2=0.38%, 4=78.95%, 10=20.66%
00:38:52.605 cpu : usr=96.82%, sys=2.94%, ctx=8, majf=0, minf=62
00:38:52.605 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:52.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:52.605 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:52.605 issued rwts: total=10444,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:52.605 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:52.605 filename0: (groupid=0, jobs=1): err= 0: pid=2432906: Thu Dec 5 21:31:53 2024
00:38:52.605 read: IOPS=2048, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5002msec)
00:38:52.605 slat (nsec): min=5500, max=53499, avg=6155.42, stdev=2049.49
00:38:52.605 clat (usec): min=1498, max=7012, avg=3887.79, stdev=718.58
00:38:52.605 lat (usec): min=1504, max=7043, avg=3893.95, stdev=718.56
00:38:52.605 clat percentiles (usec):
00:38:52.605 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3294], 20.00th=[ 3458],
00:38:52.605 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3720],
00:38:52.605 | 70.00th=[ 3818], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 5342],
00:38:52.605 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6390], 99.95th=[ 6587],
00:38:52.605 | 99.99th=[ 6980]
00:38:52.605 bw ( KiB/s): min=16096, max=16576, per=24.66%, avg=16375.11, stdev=159.62, samples=9
00:38:52.605 iops : min= 2012, max= 2072, avg=2046.89, stdev=19.95, samples=9
00:38:52.605 lat (msec) : 2=0.03%, 4=76.63%, 10=23.34%
00:38:52.605 cpu : usr=96.78%, sys=2.96%, ctx=31, majf=0, minf=38
00:38:52.605 IO depths : 1=0.1%, 2=0.1%, 4=72.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:52.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:52.605 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:52.605 issued rwts: total=10247,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:52.605 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:52.605 filename1: (groupid=0, jobs=1): err= 0: pid=2432907: Thu Dec 5 21:31:53 2024
00:38:52.605 read: IOPS=2103, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5001msec)
00:38:52.605 slat (nsec): min=5509, max=53785, avg=6124.14, stdev=1962.68
00:38:52.605 clat (usec): min=1661, max=6243, avg=3786.61, stdev=712.77
00:38:52.605 lat (usec): min=1667, max=6249, avg=3792.73, stdev=712.65
00:38:52.605 clat percentiles (usec):
00:38:52.605 | 1.00th=[ 2638], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3359],
00:38:52.605 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3654],
00:38:52.605 | 70.00th=[ 3720], 80.00th=[ 4047], 90.00th=[ 5276], 95.00th=[ 5276],
00:38:52.605 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 5932], 99.95th=[ 6063],
00:38:52.605 | 99.99th=[ 6259]
00:38:52.605 bw ( KiB/s): min=16368, max=17216, per=25.28%, avg=16791.11, stdev=240.01, samples=9
00:38:52.605 iops : min= 2046, max= 2152, avg=2098.89, stdev=30.00, samples=9
00:38:52.605 lat (msec) : 2=0.03%, 4=79.01%, 10=20.96%
00:38:52.605 cpu : usr=96.94%, sys=2.82%, ctx=8, majf=0, minf=38
00:38:52.605 IO depths : 1=0.1%, 2=0.2%, 4=72.5%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:52.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:52.605 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:52.605 issued rwts: total=10520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:52.605 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:52.605 filename1: (groupid=0, jobs=1): err= 0: pid=2432908: Thu Dec 5 21:31:53 2024
00:38:52.605 read: IOPS=2064, BW=16.1MiB/s (16.9MB/s)(80.7MiB/5001msec)
00:38:52.605 slat (nsec): min=5497, max=56045, avg=6185.72, stdev=1847.67
00:38:52.605 clat (usec): min=1416, max=8034, avg=3858.17, stdev=718.33
00:38:52.605 lat (usec): min=1422, max=8060, avg=3864.35, stdev=718.32
00:38:52.605 clat percentiles (usec):
00:38:52.605 | 1.00th=[ 2868], 5.00th=[ 3195], 10.00th=[ 3294], 20.00th=[ 3425],
00:38:52.605 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3720],
00:38:52.605 | 70.00th=[ 3818], 80.00th=[ 4080], 90.00th=[ 5276], 95.00th=[ 5276],
00:38:52.605 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 7635],
00:38:52.605 | 99.99th=[ 7701]
00:38:52.605 bw ( KiB/s): min=16432, max=16592, per=24.86%, avg=16510.22, stdev=52.12,
samples=9 00:38:52.606 iops : min= 2054, max= 2074, avg=2063.78, stdev= 6.51, samples=9 00:38:52.606 lat (msec) : 2=0.03%, 4=78.01%, 10=21.96% 00:38:52.606 cpu : usr=97.14%, sys=2.60%, ctx=7, majf=0, minf=47 00:38:52.606 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:52.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.606 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:52.606 issued rwts: total=10324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:52.606 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:52.606 00:38:52.606 Run status group 0 (all jobs): 00:38:52.606 READ: bw=64.9MiB/s (68.0MB/s), 16.0MiB/s-16.4MiB/s (16.8MB/s-17.2MB/s), io=324MiB (340MB), run=5001-5003msec 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 00:38:52.606 real 0m24.238s 00:38:52.606 user 5m17.767s 00:38:52.606 sys 0m4.467s 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 ************************************ 00:38:52.606 END TEST fio_dif_rand_params 00:38:52.606 ************************************ 00:38:52.606 21:31:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:52.606 21:31:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:52.606 21:31:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:38:52.606 ************************************ 00:38:52.606 START TEST fio_dif_digest 00:38:52.606 ************************************ 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 bdev_null0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:52.606 [2024-12-05 21:31:53.654236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:52.606 { 00:38:52.606 "params": { 00:38:52.606 "name": "Nvme$subsystem", 00:38:52.606 "trtype": "$TEST_TRANSPORT", 00:38:52.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.606 "adrfam": "ipv4", 00:38:52.606 "trsvcid": "$NVMF_PORT", 00:38:52.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.606 "hdgst": ${hdgst:-false}, 00:38:52.606 "ddgst": ${ddgst:-false} 00:38:52.606 }, 00:38:52.606 "method": "bdev_nvme_attach_controller" 00:38:52.606 } 00:38:52.606 EOF 00:38:52.606 )") 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:52.606 "params": { 00:38:52.606 "name": "Nvme0", 00:38:52.606 "trtype": "tcp", 00:38:52.606 "traddr": "10.0.0.2", 00:38:52.606 "adrfam": "ipv4", 00:38:52.606 "trsvcid": "4420", 00:38:52.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:52.606 "hdgst": true, 00:38:52.606 "ddgst": true 00:38:52.606 }, 00:38:52.606 "method": "bdev_nvme_attach_controller" 00:38:52.606 }' 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:52.606 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:52.607 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 
00:38:52.607 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:52.607 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:52.607 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:52.607 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:52.607 21:31:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:52.869 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:52.870 ... 00:38:52.870 fio-3.35 00:38:52.870 Starting 3 threads 00:39:05.107 00:39:05.107 filename0: (groupid=0, jobs=1): err= 0: pid=2434106: Thu Dec 5 21:32:04 2024 00:39:05.107 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10050msec) 00:39:05.107 slat (nsec): min=5897, max=32140, avg=6677.25, stdev=1170.16 00:39:05.107 clat (usec): min=9547, max=57273, avg=15119.51, stdev=6421.82 00:39:05.107 lat (usec): min=9553, max=57279, avg=15126.18, stdev=6421.82 00:39:05.107 clat percentiles (usec): 00:39:05.107 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:39:05.107 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:39:05.107 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[16319], 00:39:05.107 | 99.00th=[55313], 99.50th=[55313], 99.90th=[56886], 99.95th=[57410], 00:39:05.107 | 99.99th=[57410] 00:39:05.107 bw ( KiB/s): min=22272, max=27904, per=30.96%, avg=25446.40, stdev=1641.09, samples=20 00:39:05.107 iops : min= 174, max= 218, avg=198.80, stdev=12.82, samples=20 00:39:05.107 lat (msec) : 10=0.10%, 20=97.39%, 100=2.51% 00:39:05.107 cpu : usr=95.13%, sys=4.65%, ctx=20, majf=0, minf=88 00:39:05.107 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:39:05.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.107 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:05.107 filename0: (groupid=0, jobs=1): err= 0: pid=2434107: Thu Dec 5 21:32:04 2024 00:39:05.107 read: IOPS=239, BW=29.9MiB/s (31.3MB/s)(299MiB/10006msec) 00:39:05.107 slat (nsec): min=5943, max=33173, avg=6676.29, stdev=1055.82 00:39:05.107 clat (usec): min=7635, max=15824, avg=12533.64, stdev=1409.31 00:39:05.107 lat (usec): min=7642, max=15830, avg=12540.31, stdev=1409.34 00:39:05.107 clat percentiles (usec): 00:39:05.107 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[11731], 00:39:05.107 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:39:05.107 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[14353], 00:39:05.107 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15664], 99.95th=[15795], 00:39:05.107 | 99.99th=[15795] 00:39:05.107 bw ( KiB/s): min=29184, max=32256, per=37.32%, avg=30676.16, stdev=1044.18, samples=19 00:39:05.107 iops : min= 228, max= 252, avg=239.63, stdev= 8.12, samples=19 00:39:05.107 lat (msec) : 10=8.02%, 20=91.98% 00:39:05.107 cpu : usr=94.71%, sys=5.06%, ctx=23, majf=0, minf=166 00:39:05.107 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:05.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.107 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:05.107 filename0: (groupid=0, jobs=1): err= 0: pid=2434108: Thu Dec 5 21:32:04 2024 00:39:05.107 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(259MiB/10045msec) 00:39:05.107 slat (nsec): 
min=5916, max=31656, avg=7269.39, stdev=1380.43 00:39:05.107 clat (usec): min=8265, max=57552, avg=14528.46, stdev=2989.41 00:39:05.107 lat (usec): min=8272, max=57560, avg=14535.73, stdev=2989.47 00:39:05.107 clat percentiles (usec): 00:39:05.107 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[12387], 20.00th=[13566], 00:39:05.107 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:39:05.107 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16188], 95.00th=[16581], 00:39:05.107 | 99.00th=[17957], 99.50th=[18482], 99.90th=[56361], 99.95th=[57410], 00:39:05.107 | 99.99th=[57410] 00:39:05.107 bw ( KiB/s): min=24832, max=28416, per=32.21%, avg=26470.40, stdev=840.48, samples=20 00:39:05.107 iops : min= 194, max= 222, avg=206.80, stdev= 6.57, samples=20 00:39:05.107 lat (msec) : 10=2.51%, 20=97.10%, 50=0.05%, 100=0.34% 00:39:05.107 cpu : usr=96.44%, sys=3.33%, ctx=11, majf=0, minf=154 00:39:05.107 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:05.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:05.107 issued rwts: total=2070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:05.107 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:05.107 00:39:05.107 Run status group 0 (all jobs): 00:39:05.107 READ: bw=80.3MiB/s (84.2MB/s), 24.8MiB/s-29.9MiB/s (26.0MB/s-31.3MB/s), io=807MiB (846MB), run=10006-10050msec 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:05.107 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.108 00:39:05.108 real 0m11.185s 00:39:05.108 user 0m44.802s 00:39:05.108 sys 0m1.632s 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:05.108 21:32:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.108 ************************************ 00:39:05.108 END TEST fio_dif_digest 00:39:05.108 ************************************ 00:39:05.108 21:32:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:05.108 21:32:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:05.108 rmmod nvme_tcp 00:39:05.108 rmmod nvme_fabrics 00:39:05.108 rmmod nvme_keyring 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:05.108 
21:32:04 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2423938 ']' 00:39:05.108 21:32:04 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2423938 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2423938 ']' 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2423938 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2423938 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2423938' 00:39:05.108 killing process with pid 2423938 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2423938 00:39:05.108 21:32:04 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2423938 00:39:05.108 21:32:05 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:05.108 21:32:05 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:07.656 Waiting for block devices as requested 00:39:07.656 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:07.656 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:07.983 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:07.983 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:07.983 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:07.983 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:08.298 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:08.298 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:08.298 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:08.568 0000:00:01.6 (8086 0b00): vfio-pci -> 
ioatdma 00:39:08.568 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:08.568 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:08.827 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:08.827 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:08.827 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:08.827 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:09.086 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.371 21:32:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.371 21:32:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:09.371 21:32:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.284 21:32:12 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:11.284 00:39:11.284 real 1m20.282s 00:39:11.284 user 8m2.077s 00:39:11.284 sys 0m23.225s 00:39:11.284 21:32:12 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.284 21:32:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:11.284 ************************************ 00:39:11.284 END TEST nvmf_dif 00:39:11.284 ************************************ 00:39:11.545 21:32:12 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:11.545 21:32:12 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:11.545 21:32:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.545 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:39:11.545 ************************************ 00:39:11.545 START TEST nvmf_abort_qd_sizes 00:39:11.545 ************************************ 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:11.545 * Looking for test storage... 00:39:11.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:11.545 
21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:11.545 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:11.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.546 --rc genhtml_branch_coverage=1 00:39:11.546 --rc genhtml_function_coverage=1 00:39:11.546 --rc genhtml_legend=1 00:39:11.546 --rc geninfo_all_blocks=1 00:39:11.546 --rc geninfo_unexecuted_blocks=1 
00:39:11.546 00:39:11.546 ' 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:11.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.546 --rc genhtml_branch_coverage=1 00:39:11.546 --rc genhtml_function_coverage=1 00:39:11.546 --rc genhtml_legend=1 00:39:11.546 --rc geninfo_all_blocks=1 00:39:11.546 --rc geninfo_unexecuted_blocks=1 00:39:11.546 00:39:11.546 ' 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:11.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.546 --rc genhtml_branch_coverage=1 00:39:11.546 --rc genhtml_function_coverage=1 00:39:11.546 --rc genhtml_legend=1 00:39:11.546 --rc geninfo_all_blocks=1 00:39:11.546 --rc geninfo_unexecuted_blocks=1 00:39:11.546 00:39:11.546 ' 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:11.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.546 --rc genhtml_branch_coverage=1 00:39:11.546 --rc genhtml_function_coverage=1 00:39:11.546 --rc genhtml_legend=1 00:39:11.546 --rc geninfo_all_blocks=1 00:39:11.546 --rc geninfo_unexecuted_blocks=1 00:39:11.546 00:39:11.546 ' 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.546 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:11.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.809 21:32:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:11.809 21:32:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:11.809 21:32:13 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:39:11.809 21:32:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:19.951 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:19.951 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:19.951 Found net devices under 0000:31:00.0: cvl_0_0 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:19.951 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:19.952 Found net devices under 0000:31:00.1: cvl_0_1 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:19.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:19.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:39:19.952 00:39:19.952 --- 10.0.0.2 ping statistics --- 00:39:19.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:19.952 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:19.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:19.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:39:19.952 00:39:19.952 --- 10.0.0.1 ping statistics --- 00:39:19.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:19.952 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:19.952 21:32:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:24.161 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:39:24.161 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:24.161 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2444477 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2444477 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2444477 ']' 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:24.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.161 21:32:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.161 [2024-12-05 21:32:25.384599] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:39:24.161 [2024-12-05 21:32:25.384647] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.161 [2024-12-05 21:32:25.468331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:24.161 [2024-12-05 21:32:25.505423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.161 [2024-12-05 21:32:25.505452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.161 [2024-12-05 21:32:25.505460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.161 [2024-12-05 21:32:25.505467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.161 [2024-12-05 21:32:25.505473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:24.161 [2024-12-05 21:32:25.506913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.161 [2024-12-05 21:32:25.507134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.161 [2024-12-05 21:32:25.507135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.161 [2024-12-05 21:32:25.506943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.101 21:32:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:25.101 ************************************ 00:39:25.101 START TEST spdk_target_abort 00:39:25.101 ************************************ 00:39:25.101 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:25.101 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:25.101 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:25.101 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.101 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:25.363 spdk_targetn1 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:25.363 [2024-12-05 21:32:26.575904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:25.363 [2024-12-05 21:32:26.628212] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:25.363 21:32:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.624 [2024-12-05 21:32:26.825286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:280 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:25.624 [2024-12-05 21:32:26.825312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:39:25.624 [2024-12-05 21:32:26.855325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1328 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:25.624 [2024-12-05 21:32:26.855343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a8 p:1 m:0 dnr:0 00:39:25.624 [2024-12-05 21:32:26.886286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2384 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:39:25.624 [2024-12-05 
21:32:26.886304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:25.624 [2024-12-05 21:32:26.888049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2520 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:39:25.624 [2024-12-05 21:32:26.888063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:39:25.624 [2024-12-05 21:32:26.888417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2544 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:39:25.624 [2024-12-05 21:32:26.888432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:28.926 Initializing NVMe Controllers 00:39:28.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:28.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:28.926 Initialization complete. Launching workers. 
00:39:28.926 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12533, failed: 5 00:39:28.926 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3393, failed to submit 9145 00:39:28.926 success 733, unsuccessful 2660, failed 0 00:39:28.926 21:32:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:28.926 21:32:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.926 [2024-12-05 21:32:30.215024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:888 len:8 PRP1 0x200004e44000 PRP2 0x0 00:39:28.926 [2024-12-05 21:32:30.215070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:39:28.926 [2024-12-05 21:32:30.231011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1288 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:39:28.926 [2024-12-05 21:32:30.231035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00a4 p:1 m:0 dnr:0 00:39:28.926 [2024-12-05 21:32:30.237178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:1432 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:39:28.926 [2024-12-05 21:32:30.237201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00bd p:1 m:0 dnr:0 00:39:28.926 [2024-12-05 21:32:30.261073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2096 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:39:28.926 [2024-12-05 21:32:30.261096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:39:28.926 [2024-12-05 21:32:30.329989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:3744 len:8 PRP1 0x200004e48000 PRP2 0x0 00:39:28.926 [2024-12-05 21:32:30.330013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00db p:0 m:0 dnr:0 00:39:30.838 [2024-12-05 21:32:31.780222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:37024 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:39:30.838 [2024-12-05 21:32:31.780259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:39:32.255 Initializing NVMe Controllers 00:39:32.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:32.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:32.255 Initialization complete. Launching workers. 
00:39:32.255 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8684, failed: 6 00:39:32.255 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1203, failed to submit 7487 00:39:32.255 success 341, unsuccessful 862, failed 0 00:39:32.255 21:32:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:32.255 21:32:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:33.195 [2024-12-05 21:32:34.269436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:175 nsid:1 lba:90952 len:8 PRP1 0x200004b12000 PRP2 0x0 00:39:33.195 [2024-12-05 21:32:34.269469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:175 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:39:35.107 Initializing NVMe Controllers 00:39:35.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:35.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:35.107 Initialization complete. Launching workers. 
00:39:35.107 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42169, failed: 1 00:39:35.107 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2611, failed to submit 39559 00:39:35.107 success 589, unsuccessful 2022, failed 0 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.107 21:32:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2444477 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2444477 ']' 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2444477 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444477 00:39:37.020 21:32:38 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444477' 00:39:37.020 killing process with pid 2444477 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2444477 00:39:37.020 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2444477 00:39:37.281 00:39:37.281 real 0m12.258s 00:39:37.281 user 0m49.926s 00:39:37.281 sys 0m1.909s 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:37.281 ************************************ 00:39:37.281 END TEST spdk_target_abort 00:39:37.281 ************************************ 00:39:37.281 21:32:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:37.281 21:32:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:37.281 21:32:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.281 21:32:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:37.281 ************************************ 00:39:37.281 START TEST kernel_target_abort 00:39:37.281 ************************************ 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:37.281 21:32:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:37.281 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:37.282 21:32:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:41.484 Waiting for block devices as requested 00:39:41.484 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:41.484 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:41.484 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:41.484 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:41.484 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:41.484 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:41.484 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:41.744 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:41.744 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:41.745 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:42.005 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:42.005 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:42.005 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:42.266 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:42.266 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:42.266 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:42.266 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:42.846 21:32:43 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:42.846 21:32:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:42.846 No valid GPT data, bailing 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:42.846 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:39:42.847 00:39:42.847 Discovery Log Number of Records 2, Generation counter 2 00:39:42.847 =====Discovery Log Entry 0====== 00:39:42.847 trtype: tcp 00:39:42.847 adrfam: ipv4 00:39:42.847 subtype: current discovery subsystem 00:39:42.847 treq: not specified, sq flow control disable supported 00:39:42.847 portid: 1 00:39:42.847 trsvcid: 4420 00:39:42.847 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:42.847 traddr: 10.0.0.1 00:39:42.847 eflags: none 00:39:42.847 sectype: none 00:39:42.847 =====Discovery Log Entry 1====== 00:39:42.847 trtype: tcp 00:39:42.847 adrfam: ipv4 00:39:42.847 subtype: nvme subsystem 00:39:42.847 treq: not specified, sq flow control disable supported 00:39:42.847 portid: 1 00:39:42.847 trsvcid: 4420 00:39:42.847 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:42.847 traddr: 10.0.0.1 00:39:42.847 eflags: none 00:39:42.847 sectype: none 00:39:42.847 21:32:44 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:42.847 21:32:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:46.142 Initializing NVMe Controllers 00:39:46.142 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:46.142 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:46.142 Initialization complete. Launching workers. 
00:39:46.142 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66948, failed: 0 00:39:46.142 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66948, failed to submit 0 00:39:46.142 success 0, unsuccessful 66948, failed 0 00:39:46.142 21:32:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:46.142 21:32:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:49.529 Initializing NVMe Controllers 00:39:49.529 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:49.529 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:49.529 Initialization complete. Launching workers. 00:39:49.529 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107862, failed: 0 00:39:49.529 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27134, failed to submit 80728 00:39:49.529 success 0, unsuccessful 27134, failed 0 00:39:49.529 21:32:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:49.529 21:32:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:52.071 Initializing NVMe Controllers 00:39:52.071 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:52.071 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:52.071 Initialization complete. Launching workers. 
00:39:52.071 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102074, failed: 0 00:39:52.071 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25518, failed to submit 76556 00:39:52.071 success 0, unsuccessful 25518, failed 0 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:52.071 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:52.332 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:52.332 21:32:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:56.539 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:56.539 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:57.926 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:58.187 00:39:58.187 real 0m21.016s 00:39:58.187 user 0m10.182s 00:39:58.187 sys 0m6.712s 00:39:58.187 21:32:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:58.187 21:32:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:58.187 ************************************ 00:39:58.187 END TEST kernel_target_abort 00:39:58.187 ************************************ 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:58.449 rmmod nvme_tcp 00:39:58.449 rmmod nvme_fabrics 00:39:58.449 rmmod nvme_keyring 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2444477 ']' 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2444477 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2444477 ']' 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2444477 00:39:58.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2444477) - No such process 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2444477 is not found' 00:39:58.449 Process with pid 2444477 is not found 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:58.449 21:32:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:01.751 Waiting for block devices as requested 00:40:01.751 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:01.751 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:01.751 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:01.751 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:01.751 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:01.751 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:02.013 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:02.013 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:02.013 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:02.274 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:02.274 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:02.536 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:02.536 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:02.536 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:40:02.536 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:02.797 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:02.797 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:03.060 21:33:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.610 21:33:06 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:05.610 00:40:05.610 real 0m53.679s 00:40:05.610 user 1m5.666s 00:40:05.610 sys 0m19.981s 00:40:05.610 21:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:05.610 21:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:05.610 ************************************ 00:40:05.610 END TEST nvmf_abort_qd_sizes 00:40:05.610 ************************************ 00:40:05.610 21:33:06 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:05.610 21:33:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:05.610 21:33:06 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:40:05.610 21:33:06 -- common/autotest_common.sh@10 -- # set +x 00:40:05.610 ************************************ 00:40:05.610 START TEST keyring_file 00:40:05.610 ************************************ 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:05.610 * Looking for test storage... 00:40:05.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:05.610 21:33:06 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:05.610 21:33:06 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.610 --rc genhtml_branch_coverage=1 00:40:05.610 --rc genhtml_function_coverage=1 00:40:05.610 --rc genhtml_legend=1 00:40:05.610 --rc geninfo_all_blocks=1 00:40:05.610 --rc geninfo_unexecuted_blocks=1 00:40:05.610 00:40:05.610 ' 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.610 --rc genhtml_branch_coverage=1 00:40:05.610 --rc genhtml_function_coverage=1 00:40:05.610 --rc genhtml_legend=1 00:40:05.610 --rc geninfo_all_blocks=1 00:40:05.610 --rc 
geninfo_unexecuted_blocks=1 00:40:05.610 00:40:05.610 ' 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.610 --rc genhtml_branch_coverage=1 00:40:05.610 --rc genhtml_function_coverage=1 00:40:05.610 --rc genhtml_legend=1 00:40:05.610 --rc geninfo_all_blocks=1 00:40:05.610 --rc geninfo_unexecuted_blocks=1 00:40:05.610 00:40:05.610 ' 00:40:05.610 21:33:06 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.610 --rc genhtml_branch_coverage=1 00:40:05.610 --rc genhtml_function_coverage=1 00:40:05.610 --rc genhtml_legend=1 00:40:05.610 --rc geninfo_all_blocks=1 00:40:05.610 --rc geninfo_unexecuted_blocks=1 00:40:05.610 00:40:05.610 ' 00:40:05.610 21:33:06 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:05.610 21:33:06 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:05.610 21:33:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:05.611 21:33:06 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:05.611 21:33:06 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:05.611 21:33:06 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:05.611 21:33:06 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:05.611 21:33:06 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:05.611 21:33:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.611 21:33:06 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.611 21:33:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.611 21:33:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:05.611 21:33:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:05.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WJu6RtWJP3 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WJu6RtWJP3 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WJu6RtWJP3 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WJu6RtWJP3 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n2rVOcBtWm 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:05.611 21:33:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n2rVOcBtWm 00:40:05.611 21:33:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n2rVOcBtWm 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.n2rVOcBtWm 
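The `prep_key`/`format_interchange_psk` steps traced above (the `NVMeTLSkey-1` prefix, a hex key, a digest selector, and an inline `python -` invocation) can be sketched as follows. The exact byte layout — CRC32 of the key appended little-endian before base64 encoding — is an assumption based on the NVMe/TCP TLS PSK interchange format, not something visible in this trace:

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Sketch of the NVMe/TCP TLS PSK interchange encoding:
    NVMeTLSkey-1:<hh>:<base64(key || CRC32(key))>:
    (byte layout assumed, not taken from this log)."""
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))  # CRC32 appended little-endian
    b64 = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:%02x:%s:" % (digest, b64)

# key0 from the test above, digest 0 (no hash)
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The result is what the test writes to the `mktemp` path and then `chmod 0600`s before registering it with `keyring_file_add_key`.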
00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=2455269 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2455269 00:40:05.611 21:33:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:05.611 21:33:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2455269 ']' 00:40:05.611 21:33:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.611 21:33:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:05.611 21:33:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.611 21:33:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:05.611 21:33:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:05.611 [2024-12-05 21:33:06.958274] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:40:05.611 [2024-12-05 21:33:06.958352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455269 ] 00:40:05.611 [2024-12-05 21:33:07.041464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.872 [2024-12-05 21:33:07.085966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:06.444 21:33:07 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:06.444 [2024-12-05 21:33:07.757477] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:06.444 null0 00:40:06.444 [2024-12-05 21:33:07.789525] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:06.444 [2024-12-05 21:33:07.789838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:06.444 21:33:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:06.444 [2024-12-05 21:33:07.821601] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:06.444 request: 00:40:06.444 { 00:40:06.444 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:06.444 "secure_channel": false, 00:40:06.444 "listen_address": { 00:40:06.444 "trtype": "tcp", 00:40:06.444 "traddr": "127.0.0.1", 00:40:06.444 "trsvcid": "4420" 00:40:06.444 }, 00:40:06.444 "method": "nvmf_subsystem_add_listener", 00:40:06.444 "req_id": 1 00:40:06.444 } 00:40:06.444 Got JSON-RPC error response 00:40:06.444 response: 00:40:06.444 { 00:40:06.444 "code": -32602, 00:40:06.444 "message": "Invalid parameters" 00:40:06.444 } 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:06.444 21:33:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=2455501 00:40:06.444 21:33:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2455501 /var/tmp/bperf.sock 00:40:06.444 21:33:07 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:06.444 21:33:07 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2455501 ']' 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:06.444 21:33:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:06.705 [2024-12-05 21:33:07.881065] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 00:40:06.705 [2024-12-05 21:33:07.881115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455501 ] 00:40:06.706 [2024-12-05 21:33:07.976937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.706 [2024-12-05 21:33:08.012941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.278 21:33:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.278 21:33:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:07.278 21:33:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:07.278 21:33:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:07.541 21:33:08 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.n2rVOcBtWm 00:40:07.541 21:33:08 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.n2rVOcBtWm 00:40:07.802 21:33:09 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:07.802 21:33:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:07.802 21:33:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:07.802 21:33:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:07.802 21:33:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.802 21:33:09 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WJu6RtWJP3 == \/\t\m\p\/\t\m\p\.\W\J\u\6\R\t\W\J\P\3 ]] 00:40:07.802 21:33:09 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:07.802 21:33:09 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:07.802 21:33:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:07.802 21:33:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:07.802 21:33:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:08.062 21:33:09 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.n2rVOcBtWm == \/\t\m\p\/\t\m\p\.\n\2\r\V\O\c\B\t\W\m ]] 00:40:08.062 21:33:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:08.062 21:33:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:08.062 21:33:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.062 21:33:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.062 21:33:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.062 21:33:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
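The `get_key`/`get_refcnt` helpers above pipe `keyring_get_keys` output through `jq '.[] | select(.name == ...)'` and `jq -r .refcnt`. The same selection can be sketched in Python; the sample JSON shape below is hypothetical, inferred only from the fields the test reads (`name`, `path`, `refcnt`):

```python
import json

def get_key(keys_json: str, name: str) -> dict:
    """Mimic jq '.[] | select(.name == $name)' over keyring_get_keys output."""
    return next(k for k in json.loads(keys_json) if k["name"] == name)

# Hypothetical sample shaped after the fields the test reads.
sample = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.WJu6RtWJP3", "refcnt": 1},
    {"name": "key1", "path": "/tmp/tmp.n2rVOcBtWm", "refcnt": 1},
])
print(get_key(sample, "key0")["path"])    # /tmp/tmp.WJu6RtWJP3
print(get_key(sample, "key1")["refcnt"])  # 1
```

The `[[ path == \/\t\m\p\... ]]` and `(( refcnt == 1 ))` assertions in the trace are just these two lookups compared against the expected values.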
00:40:08.323 21:33:09 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:08.323 21:33:09 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:08.323 21:33:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.323 21:33:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:08.323 21:33:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.323 21:33:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.323 21:33:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:08.323 21:33:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:08.323 21:33:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:08.323 21:33:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:08.584 [2024-12-05 21:33:09.848381] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:08.584 nvme0n1 00:40:08.584 21:33:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:08.584 21:33:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:08.584 21:33:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.584 21:33:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.584 21:33:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:08.584 21:33:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:40:08.843 21:33:10 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:08.843 21:33:10 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:08.843 21:33:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:08.843 21:33:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:08.843 21:33:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:08.843 21:33:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:08.843 21:33:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:09.103 21:33:10 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:09.103 21:33:10 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:09.103 Running I/O for 1 seconds... 00:40:10.071 15327.00 IOPS, 59.87 MiB/s 00:40:10.071 Latency(us) 00:40:10.071 [2024-12-05T20:33:11.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.071 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:10.071 nvme0n1 : 1.01 15343.42 59.94 0.00 0.00 8309.42 5625.17 14964.05 00:40:10.071 [2024-12-05T20:33:11.508Z] =================================================================================================================== 00:40:10.071 [2024-12-05T20:33:11.508Z] Total : 15343.42 59.94 0.00 0.00 8309.42 5625.17 14964.05 00:40:10.071 { 00:40:10.071 "results": [ 00:40:10.071 { 00:40:10.071 "job": "nvme0n1", 00:40:10.071 "core_mask": "0x2", 00:40:10.071 "workload": "randrw", 00:40:10.071 "percentage": 50, 00:40:10.071 "status": "finished", 00:40:10.071 "queue_depth": 128, 00:40:10.071 "io_size": 4096, 00:40:10.071 "runtime": 1.007272, 00:40:10.071 "iops": 15343.422630630059, 00:40:10.071 "mibps": 59.93524465089867, 00:40:10.071 
"io_failed": 0, 00:40:10.071 "io_timeout": 0, 00:40:10.071 "avg_latency_us": 8309.419550091665, 00:40:10.071 "min_latency_us": 5625.173333333333, 00:40:10.071 "max_latency_us": 14964.053333333333 00:40:10.071 } 00:40:10.071 ], 00:40:10.071 "core_count": 1 00:40:10.071 } 00:40:10.071 21:33:11 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:10.071 21:33:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:10.331 21:33:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:10.331 21:33:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:10.331 21:33:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.331 21:33:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.331 21:33:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:10.331 21:33:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.591 21:33:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:10.591 21:33:11 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:10.591 21:33:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:10.591 21:33:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.591 21:33:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.591 21:33:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.591 21:33:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:10.591 21:33:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:10.591 21:33:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:10.591 21:33:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:10.591 21:33:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:10.851 [2024-12-05 21:33:12.113006] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:10.851 [2024-12-05 21:33:12.113448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6b630 (107): Transport endpoint is not connected 00:40:10.851 [2024-12-05 21:33:12.114444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6b630 (9): Bad file descriptor 00:40:10.851 [2024-12-05 21:33:12.115446] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:10.851 [2024-12-05 21:33:12.115459] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:10.851 [2024-12-05 21:33:12.115465] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:10.852 [2024-12-05 21:33:12.115471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:40:10.852 request: 00:40:10.852 { 00:40:10.852 "name": "nvme0", 00:40:10.852 "trtype": "tcp", 00:40:10.852 "traddr": "127.0.0.1", 00:40:10.852 "adrfam": "ipv4", 00:40:10.852 "trsvcid": "4420", 00:40:10.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:10.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:10.852 "prchk_reftag": false, 00:40:10.852 "prchk_guard": false, 00:40:10.852 "hdgst": false, 00:40:10.852 "ddgst": false, 00:40:10.852 "psk": "key1", 00:40:10.852 "allow_unrecognized_csi": false, 00:40:10.852 "method": "bdev_nvme_attach_controller", 00:40:10.852 "req_id": 1 00:40:10.852 } 00:40:10.852 Got JSON-RPC error response 00:40:10.852 response: 00:40:10.852 { 00:40:10.852 "code": -5, 00:40:10.852 "message": "Input/output error" 00:40:10.852 } 00:40:10.852 21:33:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:10.852 21:33:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:10.852 21:33:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:10.852 21:33:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:10.852 21:33:12 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:10.852 21:33:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:10.852 21:33:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.852 21:33:12 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:40:10.852 21:33:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:10.852 21:33:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.114 21:33:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:11.114 21:33:12 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:11.114 21:33:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:11.114 21:33:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:11.114 21:33:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:11.114 21:33:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:11.114 21:33:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.114 21:33:12 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:11.114 21:33:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:11.114 21:33:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:11.375 21:33:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:11.375 21:33:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:11.636 21:33:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:11.636 21:33:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:11.636 21:33:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:11.636 21:33:13 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:40:11.636 21:33:13 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.WJu6RtWJP3 00:40:11.636 21:33:13 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:11.636 21:33:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:11.636 21:33:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:11.899 [2024-12-05 21:33:13.165155] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WJu6RtWJP3': 0100660 00:40:11.899 [2024-12-05 21:33:13.165176] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:11.899 request: 00:40:11.899 { 00:40:11.899 "name": "key0", 00:40:11.899 "path": "/tmp/tmp.WJu6RtWJP3", 00:40:11.899 "method": "keyring_file_add_key", 00:40:11.899 "req_id": 1 00:40:11.899 } 00:40:11.899 Got JSON-RPC error response 00:40:11.899 response: 00:40:11.899 { 00:40:11.899 "code": -1, 00:40:11.899 "message": "Operation not permitted" 00:40:11.899 } 00:40:11.899 21:33:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:11.899 21:33:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:11.899 21:33:13 
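The failure above (`chmod 0660` followed by `keyring_file_add_key` reporting `Invalid permissions for key file '/tmp/tmp.WJu6RtWJP3': 0100660` and returning "Operation not permitted") suggests the keyring rejects key files that are accessible to group or others. A minimal sketch of that check, assuming a 0600-or-stricter rule (the exact mask SPDK applies is not shown in this log):

```python
import os
import stat
import tempfile

def key_file_perms_ok(path: str) -> bool:
    """Reject key files with any group/other permission bits set
    (rule assumed from the 0660-fails / 0600-succeeds behavior above)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o600)
print(key_file_perms_ok(path))  # True
os.chmod(path, 0o660)
print(key_file_perms_ok(path))  # False
os.unlink(path)
```

This matches the rest of the trace: once the file is restored to 0600, the subsequent `keyring_file_add_key` succeeds again.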
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:11.899 21:33:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:11.899 21:33:13 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.WJu6RtWJP3 00:40:11.899 21:33:13 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:11.899 21:33:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WJu6RtWJP3 00:40:12.161 21:33:13 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.WJu6RtWJP3 00:40:12.161 21:33:13 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:12.161 21:33:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:12.161 21:33:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:12.161 21:33:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:12.161 21:33:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:12.161 21:33:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:12.161 21:33:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:12.161 21:33:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.161 21:33:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:12.161 21:33:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.161 21:33:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:12.161 21:33:13 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:12.161 21:33:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:12.161 21:33:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:12.161 21:33:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.161 21:33:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.423 [2024-12-05 21:33:13.690488] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WJu6RtWJP3': No such file or directory 00:40:12.423 [2024-12-05 21:33:13.690500] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:12.423 [2024-12-05 21:33:13.690513] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:12.423 [2024-12-05 21:33:13.690518] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:12.423 [2024-12-05 21:33:13.690524] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:12.423 [2024-12-05 21:33:13.690533] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:12.423 request: 00:40:12.423 { 00:40:12.423 "name": "nvme0", 00:40:12.423 "trtype": "tcp", 00:40:12.423 "traddr": "127.0.0.1", 00:40:12.423 "adrfam": "ipv4", 00:40:12.423 "trsvcid": "4420", 00:40:12.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.423 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:40:12.423 "prchk_reftag": false, 00:40:12.423 "prchk_guard": false, 00:40:12.423 "hdgst": false, 00:40:12.423 "ddgst": false, 00:40:12.423 "psk": "key0", 00:40:12.424 "allow_unrecognized_csi": false, 00:40:12.424 "method": "bdev_nvme_attach_controller", 00:40:12.424 "req_id": 1 00:40:12.424 } 00:40:12.424 Got JSON-RPC error response 00:40:12.424 response: 00:40:12.424 { 00:40:12.424 "code": -19, 00:40:12.424 "message": "No such device" 00:40:12.424 } 00:40:12.424 21:33:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:12.424 21:33:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:12.424 21:33:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:12.424 21:33:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:12.424 21:33:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:12.424 21:33:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:12.686 21:33:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7tM9WqASoZ 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:12.686 21:33:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:12.686 21:33:13 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:40:12.686 21:33:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:12.686 21:33:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:12.686 21:33:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:12.686 21:33:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7tM9WqASoZ 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7tM9WqASoZ 00:40:12.686 21:33:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.7tM9WqASoZ 00:40:12.686 21:33:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7tM9WqASoZ 00:40:12.686 21:33:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7tM9WqASoZ 00:40:12.686 21:33:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.686 21:33:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.947 nvme0n1 00:40:12.947 21:33:14 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:12.947 21:33:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:12.947 21:33:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:12.947 21:33:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:12.947 21:33:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:12.947 21:33:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.209 21:33:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:13.209 21:33:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:13.209 21:33:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:13.471 21:33:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:13.471 21:33:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.471 21:33:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:13.471 21:33:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.471 21:33:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.732 21:33:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:13.732 21:33:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:13.732 21:33:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:40:13.999 21:33:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:13.999 21:33:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:13.999 21:33:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.999 21:33:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:13.999 21:33:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7tM9WqASoZ 00:40:13.999 21:33:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7tM9WqASoZ 00:40:14.259 21:33:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.n2rVOcBtWm 00:40:14.259 21:33:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.n2rVOcBtWm 00:40:14.259 21:33:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:14.259 21:33:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:14.519 nvme0n1 00:40:14.519 21:33:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:14.519 21:33:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:14.795 21:33:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:14.795 "subsystems": [ 00:40:14.795 { 00:40:14.795 "subsystem": 
"keyring", 00:40:14.795 "config": [ 00:40:14.795 { 00:40:14.795 "method": "keyring_file_add_key", 00:40:14.795 "params": { 00:40:14.795 "name": "key0", 00:40:14.795 "path": "/tmp/tmp.7tM9WqASoZ" 00:40:14.795 } 00:40:14.795 }, 00:40:14.795 { 00:40:14.795 "method": "keyring_file_add_key", 00:40:14.795 "params": { 00:40:14.795 "name": "key1", 00:40:14.795 "path": "/tmp/tmp.n2rVOcBtWm" 00:40:14.795 } 00:40:14.795 } 00:40:14.795 ] 00:40:14.795 }, 00:40:14.795 { 00:40:14.795 "subsystem": "iobuf", 00:40:14.795 "config": [ 00:40:14.795 { 00:40:14.795 "method": "iobuf_set_options", 00:40:14.795 "params": { 00:40:14.795 "small_pool_count": 8192, 00:40:14.795 "large_pool_count": 1024, 00:40:14.795 "small_bufsize": 8192, 00:40:14.795 "large_bufsize": 135168, 00:40:14.795 "enable_numa": false 00:40:14.795 } 00:40:14.795 } 00:40:14.795 ] 00:40:14.795 }, 00:40:14.795 { 00:40:14.795 "subsystem": "sock", 00:40:14.795 "config": [ 00:40:14.795 { 00:40:14.795 "method": "sock_set_default_impl", 00:40:14.795 "params": { 00:40:14.795 "impl_name": "posix" 00:40:14.795 } 00:40:14.795 }, 00:40:14.795 { 00:40:14.795 "method": "sock_impl_set_options", 00:40:14.795 "params": { 00:40:14.795 "impl_name": "ssl", 00:40:14.795 "recv_buf_size": 4096, 00:40:14.795 "send_buf_size": 4096, 00:40:14.795 "enable_recv_pipe": true, 00:40:14.795 "enable_quickack": false, 00:40:14.795 "enable_placement_id": 0, 00:40:14.795 "enable_zerocopy_send_server": true, 00:40:14.795 "enable_zerocopy_send_client": false, 00:40:14.795 "zerocopy_threshold": 0, 00:40:14.795 "tls_version": 0, 00:40:14.795 "enable_ktls": false 00:40:14.795 } 00:40:14.795 }, 00:40:14.795 { 00:40:14.795 "method": "sock_impl_set_options", 00:40:14.795 "params": { 00:40:14.795 "impl_name": "posix", 00:40:14.795 "recv_buf_size": 2097152, 00:40:14.795 "send_buf_size": 2097152, 00:40:14.795 "enable_recv_pipe": true, 00:40:14.795 "enable_quickack": false, 00:40:14.795 "enable_placement_id": 0, 00:40:14.795 "enable_zerocopy_send_server": true, 
00:40:14.795 "enable_zerocopy_send_client": false, 00:40:14.795 "zerocopy_threshold": 0, 00:40:14.795 "tls_version": 0, 00:40:14.795 "enable_ktls": false 00:40:14.795 } 00:40:14.795 } 00:40:14.795 ] 00:40:14.795 }, 00:40:14.795 { 00:40:14.795 "subsystem": "vmd", 00:40:14.796 "config": [] 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "subsystem": "accel", 00:40:14.796 "config": [ 00:40:14.796 { 00:40:14.796 "method": "accel_set_options", 00:40:14.796 "params": { 00:40:14.796 "small_cache_size": 128, 00:40:14.796 "large_cache_size": 16, 00:40:14.796 "task_count": 2048, 00:40:14.796 "sequence_count": 2048, 00:40:14.796 "buf_count": 2048 00:40:14.796 } 00:40:14.796 } 00:40:14.796 ] 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "subsystem": "bdev", 00:40:14.796 "config": [ 00:40:14.796 { 00:40:14.796 "method": "bdev_set_options", 00:40:14.796 "params": { 00:40:14.796 "bdev_io_pool_size": 65535, 00:40:14.796 "bdev_io_cache_size": 256, 00:40:14.796 "bdev_auto_examine": true, 00:40:14.796 "iobuf_small_cache_size": 128, 00:40:14.796 "iobuf_large_cache_size": 16 00:40:14.796 } 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "method": "bdev_raid_set_options", 00:40:14.796 "params": { 00:40:14.796 "process_window_size_kb": 1024, 00:40:14.796 "process_max_bandwidth_mb_sec": 0 00:40:14.796 } 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "method": "bdev_iscsi_set_options", 00:40:14.796 "params": { 00:40:14.796 "timeout_sec": 30 00:40:14.796 } 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "method": "bdev_nvme_set_options", 00:40:14.796 "params": { 00:40:14.796 "action_on_timeout": "none", 00:40:14.796 "timeout_us": 0, 00:40:14.796 "timeout_admin_us": 0, 00:40:14.796 "keep_alive_timeout_ms": 10000, 00:40:14.796 "arbitration_burst": 0, 00:40:14.796 "low_priority_weight": 0, 00:40:14.796 "medium_priority_weight": 0, 00:40:14.796 "high_priority_weight": 0, 00:40:14.796 "nvme_adminq_poll_period_us": 10000, 00:40:14.796 "nvme_ioq_poll_period_us": 0, 00:40:14.796 "io_queue_requests": 512, 
00:40:14.796 "delay_cmd_submit": true, 00:40:14.796 "transport_retry_count": 4, 00:40:14.796 "bdev_retry_count": 3, 00:40:14.796 "transport_ack_timeout": 0, 00:40:14.796 "ctrlr_loss_timeout_sec": 0, 00:40:14.796 "reconnect_delay_sec": 0, 00:40:14.796 "fast_io_fail_timeout_sec": 0, 00:40:14.796 "disable_auto_failback": false, 00:40:14.796 "generate_uuids": false, 00:40:14.796 "transport_tos": 0, 00:40:14.796 "nvme_error_stat": false, 00:40:14.796 "rdma_srq_size": 0, 00:40:14.796 "io_path_stat": false, 00:40:14.796 "allow_accel_sequence": false, 00:40:14.796 "rdma_max_cq_size": 0, 00:40:14.796 "rdma_cm_event_timeout_ms": 0, 00:40:14.796 "dhchap_digests": [ 00:40:14.796 "sha256", 00:40:14.796 "sha384", 00:40:14.796 "sha512" 00:40:14.796 ], 00:40:14.796 "dhchap_dhgroups": [ 00:40:14.796 "null", 00:40:14.796 "ffdhe2048", 00:40:14.796 "ffdhe3072", 00:40:14.796 "ffdhe4096", 00:40:14.796 "ffdhe6144", 00:40:14.796 "ffdhe8192" 00:40:14.796 ] 00:40:14.796 } 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "method": "bdev_nvme_attach_controller", 00:40:14.796 "params": { 00:40:14.796 "name": "nvme0", 00:40:14.796 "trtype": "TCP", 00:40:14.796 "adrfam": "IPv4", 00:40:14.796 "traddr": "127.0.0.1", 00:40:14.796 "trsvcid": "4420", 00:40:14.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:14.796 "prchk_reftag": false, 00:40:14.796 "prchk_guard": false, 00:40:14.796 "ctrlr_loss_timeout_sec": 0, 00:40:14.796 "reconnect_delay_sec": 0, 00:40:14.796 "fast_io_fail_timeout_sec": 0, 00:40:14.796 "psk": "key0", 00:40:14.796 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:14.796 "hdgst": false, 00:40:14.796 "ddgst": false, 00:40:14.796 "multipath": "multipath" 00:40:14.796 } 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "method": "bdev_nvme_set_hotplug", 00:40:14.796 "params": { 00:40:14.796 "period_us": 100000, 00:40:14.796 "enable": false 00:40:14.796 } 00:40:14.796 }, 00:40:14.796 { 00:40:14.796 "method": "bdev_wait_for_examine" 00:40:14.796 } 00:40:14.796 ] 00:40:14.796 }, 00:40:14.796 { 
00:40:14.796 "subsystem": "nbd", 00:40:14.796 "config": [] 00:40:14.796 } 00:40:14.796 ] 00:40:14.796 }' 00:40:14.796 21:33:16 keyring_file -- keyring/file.sh@115 -- # killprocess 2455501 00:40:14.796 21:33:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2455501 ']' 00:40:14.796 21:33:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2455501 00:40:14.796 21:33:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:14.796 21:33:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:14.796 21:33:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455501 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455501' 00:40:15.057 killing process with pid 2455501 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@973 -- # kill 2455501 00:40:15.057 Received shutdown signal, test time was about 1.000000 seconds 00:40:15.057 00:40:15.057 Latency(us) 00:40:15.057 [2024-12-05T20:33:16.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.057 [2024-12-05T20:33:16.494Z] =================================================================================================================== 00:40:15.057 [2024-12-05T20:33:16.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@978 -- # wait 2455501 00:40:15.057 21:33:16 keyring_file -- keyring/file.sh@118 -- # bperfpid=2457679 00:40:15.057 21:33:16 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2457679 /var/tmp/bperf.sock 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2457679 ']' 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.057 21:33:16 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:15.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.057 21:33:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:15.057 21:33:16 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:15.057 "subsystems": [ 00:40:15.057 { 00:40:15.057 "subsystem": "keyring", 00:40:15.057 "config": [ 00:40:15.057 { 00:40:15.057 "method": "keyring_file_add_key", 00:40:15.057 "params": { 00:40:15.057 "name": "key0", 00:40:15.057 "path": "/tmp/tmp.7tM9WqASoZ" 00:40:15.057 } 00:40:15.057 }, 00:40:15.057 { 00:40:15.057 "method": "keyring_file_add_key", 00:40:15.057 "params": { 00:40:15.057 "name": "key1", 00:40:15.057 "path": "/tmp/tmp.n2rVOcBtWm" 00:40:15.057 } 00:40:15.057 } 00:40:15.057 ] 00:40:15.057 }, 00:40:15.057 { 00:40:15.057 "subsystem": "iobuf", 00:40:15.057 "config": [ 00:40:15.057 { 00:40:15.057 "method": "iobuf_set_options", 00:40:15.057 "params": { 00:40:15.057 "small_pool_count": 8192, 00:40:15.057 "large_pool_count": 1024, 00:40:15.057 "small_bufsize": 8192, 00:40:15.057 "large_bufsize": 135168, 00:40:15.057 "enable_numa": false 00:40:15.057 } 00:40:15.057 } 00:40:15.057 ] 00:40:15.057 }, 00:40:15.057 { 00:40:15.057 "subsystem": "sock", 00:40:15.057 "config": [ 00:40:15.057 { 00:40:15.057 "method": "sock_set_default_impl", 00:40:15.057 "params": { 00:40:15.057 "impl_name": "posix" 00:40:15.057 } 00:40:15.057 }, 
00:40:15.057 { 00:40:15.057 "method": "sock_impl_set_options", 00:40:15.057 "params": { 00:40:15.058 "impl_name": "ssl", 00:40:15.058 "recv_buf_size": 4096, 00:40:15.058 "send_buf_size": 4096, 00:40:15.058 "enable_recv_pipe": true, 00:40:15.058 "enable_quickack": false, 00:40:15.058 "enable_placement_id": 0, 00:40:15.058 "enable_zerocopy_send_server": true, 00:40:15.058 "enable_zerocopy_send_client": false, 00:40:15.058 "zerocopy_threshold": 0, 00:40:15.058 "tls_version": 0, 00:40:15.058 "enable_ktls": false 00:40:15.058 } 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "sock_impl_set_options", 00:40:15.058 "params": { 00:40:15.058 "impl_name": "posix", 00:40:15.058 "recv_buf_size": 2097152, 00:40:15.058 "send_buf_size": 2097152, 00:40:15.058 "enable_recv_pipe": true, 00:40:15.058 "enable_quickack": false, 00:40:15.058 "enable_placement_id": 0, 00:40:15.058 "enable_zerocopy_send_server": true, 00:40:15.058 "enable_zerocopy_send_client": false, 00:40:15.058 "zerocopy_threshold": 0, 00:40:15.058 "tls_version": 0, 00:40:15.058 "enable_ktls": false 00:40:15.058 } 00:40:15.058 } 00:40:15.058 ] 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "subsystem": "vmd", 00:40:15.058 "config": [] 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "subsystem": "accel", 00:40:15.058 "config": [ 00:40:15.058 { 00:40:15.058 "method": "accel_set_options", 00:40:15.058 "params": { 00:40:15.058 "small_cache_size": 128, 00:40:15.058 "large_cache_size": 16, 00:40:15.058 "task_count": 2048, 00:40:15.058 "sequence_count": 2048, 00:40:15.058 "buf_count": 2048 00:40:15.058 } 00:40:15.058 } 00:40:15.058 ] 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "subsystem": "bdev", 00:40:15.058 "config": [ 00:40:15.058 { 00:40:15.058 "method": "bdev_set_options", 00:40:15.058 "params": { 00:40:15.058 "bdev_io_pool_size": 65535, 00:40:15.058 "bdev_io_cache_size": 256, 00:40:15.058 "bdev_auto_examine": true, 00:40:15.058 "iobuf_small_cache_size": 128, 00:40:15.058 "iobuf_large_cache_size": 16 00:40:15.058 } 
00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "bdev_raid_set_options", 00:40:15.058 "params": { 00:40:15.058 "process_window_size_kb": 1024, 00:40:15.058 "process_max_bandwidth_mb_sec": 0 00:40:15.058 } 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "bdev_iscsi_set_options", 00:40:15.058 "params": { 00:40:15.058 "timeout_sec": 30 00:40:15.058 } 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "bdev_nvme_set_options", 00:40:15.058 "params": { 00:40:15.058 "action_on_timeout": "none", 00:40:15.058 "timeout_us": 0, 00:40:15.058 "timeout_admin_us": 0, 00:40:15.058 "keep_alive_timeout_ms": 10000, 00:40:15.058 "arbitration_burst": 0, 00:40:15.058 "low_priority_weight": 0, 00:40:15.058 "medium_priority_weight": 0, 00:40:15.058 "high_priority_weight": 0, 00:40:15.058 "nvme_adminq_poll_period_us": 10000, 00:40:15.058 "nvme_ioq_poll_period_us": 0, 00:40:15.058 "io_queue_requests": 512, 00:40:15.058 "delay_cmd_submit": true, 00:40:15.058 "transport_retry_count": 4, 00:40:15.058 "bdev_retry_count": 3, 00:40:15.058 "transport_ack_timeout": 0, 00:40:15.058 "ctrlr_loss_timeout_sec": 0, 00:40:15.058 "reconnect_delay_sec": 0, 00:40:15.058 "fast_io_fail_timeout_sec": 0, 00:40:15.058 "disable_auto_failback": false, 00:40:15.058 "generate_uuids": false, 00:40:15.058 "transport_tos": 0, 00:40:15.058 "nvme_error_stat": false, 00:40:15.058 "rdma_srq_size": 0, 00:40:15.058 "io_path_stat": false, 00:40:15.058 "allow_accel_sequence": false, 00:40:15.058 "rdma_max_cq_size": 0, 00:40:15.058 "rdma_cm_event_timeout_ms": 0, 00:40:15.058 "dhchap_digests": [ 00:40:15.058 "sha256", 00:40:15.058 "sha384", 00:40:15.058 "sha512" 00:40:15.058 ], 00:40:15.058 "dhchap_dhgroups": [ 00:40:15.058 "null", 00:40:15.058 "ffdhe2048", 00:40:15.058 "ffdhe3072", 00:40:15.058 "ffdhe4096", 00:40:15.058 "ffdhe6144", 00:40:15.058 "ffdhe8192" 00:40:15.058 ] 00:40:15.058 } 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "bdev_nvme_attach_controller", 00:40:15.058 "params": { 00:40:15.058 
"name": "nvme0", 00:40:15.058 "trtype": "TCP", 00:40:15.058 "adrfam": "IPv4", 00:40:15.058 "traddr": "127.0.0.1", 00:40:15.058 "trsvcid": "4420", 00:40:15.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:15.058 "prchk_reftag": false, 00:40:15.058 "prchk_guard": false, 00:40:15.058 "ctrlr_loss_timeout_sec": 0, 00:40:15.058 "reconnect_delay_sec": 0, 00:40:15.058 "fast_io_fail_timeout_sec": 0, 00:40:15.058 "psk": "key0", 00:40:15.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:15.058 "hdgst": false, 00:40:15.058 "ddgst": false, 00:40:15.058 "multipath": "multipath" 00:40:15.058 } 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "bdev_nvme_set_hotplug", 00:40:15.058 "params": { 00:40:15.058 "period_us": 100000, 00:40:15.058 "enable": false 00:40:15.058 } 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "method": "bdev_wait_for_examine" 00:40:15.058 } 00:40:15.058 ] 00:40:15.058 }, 00:40:15.058 { 00:40:15.058 "subsystem": "nbd", 00:40:15.058 "config": [] 00:40:15.058 } 00:40:15.058 ] 00:40:15.058 }' 00:40:15.058 [2024-12-05 21:33:16.385342] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:40:15.058 [2024-12-05 21:33:16.385401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457679 ] 00:40:15.058 [2024-12-05 21:33:16.475057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.318 [2024-12-05 21:33:16.504359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.318 [2024-12-05 21:33:16.648694] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:15.889 21:33:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.889 21:33:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:15.889 21:33:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:15.889 21:33:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:15.889 21:33:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:16.150 21:33:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:16.150 21:33:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:16.150 21:33:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:16.150 21:33:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:16.150 21:33:17 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:16.150 21:33:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:16.410 21:33:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:16.410 21:33:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:16.410 21:33:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:16.410 21:33:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:16.670 21:33:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:16.671 21:33:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:16.671 21:33:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.7tM9WqASoZ /tmp/tmp.n2rVOcBtWm 00:40:16.671 21:33:17 keyring_file -- keyring/file.sh@20 -- # killprocess 2457679 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2457679 ']' 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2457679 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2457679 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2457679' 00:40:16.671 killing process with pid 2457679 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@973 -- # kill 2457679 00:40:16.671 Received shutdown signal, test time was about 1.000000 seconds 00:40:16.671 00:40:16.671 Latency(us) 00:40:16.671 [2024-12-05T20:33:18.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:16.671 [2024-12-05T20:33:18.108Z] =================================================================================================================== 00:40:16.671 [2024-12-05T20:33:18.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:16.671 21:33:17 keyring_file -- common/autotest_common.sh@978 -- # wait 2457679 00:40:16.671 21:33:18 keyring_file -- keyring/file.sh@21 -- # killprocess 2455269 00:40:16.671 21:33:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2455269 ']' 00:40:16.671 21:33:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2455269 00:40:16.671 21:33:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:16.671 21:33:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:16.671 21:33:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455269 00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455269' 00:40:16.933 killing process with pid 2455269 00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@973 -- # kill 2455269 00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@978 -- # wait 2455269 00:40:16.933 00:40:16.933 real 0m11.816s 00:40:16.933 user 0m28.310s 00:40:16.933 sys 0m2.640s 00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:40:16.933 21:33:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:16.933 ************************************ 00:40:16.933 END TEST keyring_file 00:40:16.933 ************************************ 00:40:17.195 21:33:18 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:17.195 21:33:18 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:17.195 21:33:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:17.195 21:33:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.195 21:33:18 -- common/autotest_common.sh@10 -- # set +x 00:40:17.195 ************************************ 00:40:17.195 START TEST keyring_linux 00:40:17.195 ************************************ 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:17.195 Joined session keyring: 637101843 00:40:17.195 * Looking for test storage... 
00:40:17.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.195 21:33:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:17.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.195 --rc genhtml_branch_coverage=1 00:40:17.195 --rc genhtml_function_coverage=1 00:40:17.195 --rc genhtml_legend=1 00:40:17.195 --rc geninfo_all_blocks=1 00:40:17.195 --rc geninfo_unexecuted_blocks=1 00:40:17.195 00:40:17.195 ' 00:40:17.195 21:33:18 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:17.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.195 --rc genhtml_branch_coverage=1 00:40:17.195 --rc genhtml_function_coverage=1 00:40:17.195 --rc genhtml_legend=1 00:40:17.195 --rc geninfo_all_blocks=1 00:40:17.196 --rc geninfo_unexecuted_blocks=1 00:40:17.196 00:40:17.196 ' 
00:40:17.196 21:33:18 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:17.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.196 --rc genhtml_branch_coverage=1 00:40:17.196 --rc genhtml_function_coverage=1 00:40:17.196 --rc genhtml_legend=1 00:40:17.196 --rc geninfo_all_blocks=1 00:40:17.196 --rc geninfo_unexecuted_blocks=1 00:40:17.196 00:40:17.196 ' 00:40:17.196 21:33:18 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:17.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.196 --rc genhtml_branch_coverage=1 00:40:17.196 --rc genhtml_function_coverage=1 00:40:17.196 --rc genhtml_legend=1 00:40:17.196 --rc geninfo_all_blocks=1 00:40:17.196 --rc geninfo_unexecuted_blocks=1 00:40:17.196 00:40:17.196 ' 00:40:17.196 21:33:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:17.196 21:33:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.196 21:33:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.458 21:33:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.458 21:33:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.458 21:33:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.458 21:33:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.458 21:33:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.458 21:33:18 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.458 21:33:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.458 21:33:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:17.458 21:33:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:17.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.458 21:33:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.458 21:33:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:17.459 /tmp/:spdk-test:key0 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:17.459 21:33:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:17.459 21:33:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:17.459 /tmp/:spdk-test:key1 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:17.459 
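The two `prep_key` steps above pipe the raw hex key through `format_interchange_psk`, which invokes an inline `python -` snippet to wrap the configured key in the NVMe/TCP TLS PSK interchange format (`NVMeTLSkey-1:<digest>:<base64 payload>:`). A minimal standalone sketch of that encoding, assuming (as in the TP 8011 interchange format) that the base64 payload is the key bytes followed by a 4-byte little-endian CRC32 of those bytes; the function name mirrors the helper seen in the log but this is an illustration, not SPDK's actual `nvmf/common.sh` code:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """Wrap a configured PSK in the NVMe/TCP TLS PSK interchange format.

    Sketch of what the log's format_interchange_psk helper produces:
    base64(key bytes || little-endian CRC32 of key bytes), prefixed with
    NVMeTLSkey-1 and a two-digit hash identifier (0 = no hash).
    """
    data = key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(data))  # 4-byte little-endian CRC32
    payload = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02d}:{payload}:"

# key0 from the test (keyring/linux.sh@13)
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

For the 32-character key0 this yields a 48-character base64 payload with no padding, matching the shape of the keyring entry printed later in the log (`NVMeTLSkey-1:00:MDAx...JEiQ:`), whose decoded payload begins with the ASCII hex key itself.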
21:33:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2458190 00:40:17.459 21:33:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2458190 00:40:17.459 21:33:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2458190 ']' 00:40:17.459 21:33:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.459 21:33:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:17.459 21:33:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:17.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.459 21:33:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:17.459 21:33:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:17.459 [2024-12-05 21:33:18.808088] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:40:17.459 [2024-12-05 21:33:18.808168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458190 ] 00:40:17.459 [2024-12-05 21:33:18.891196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.719 [2024-12-05 21:33:18.932806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.292 21:33:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:18.292 21:33:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:18.292 21:33:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:18.292 21:33:19 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.292 21:33:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:18.292 [2024-12-05 21:33:19.615261] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:18.292 null0 00:40:18.292 [2024-12-05 21:33:19.647309] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:18.293 [2024-12-05 21:33:19.647722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.293 21:33:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:18.293 269368535 00:40:18.293 21:33:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:18.293 398616446 00:40:18.293 21:33:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2458322 00:40:18.293 21:33:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2458322 /var/tmp/bperf.sock 00:40:18.293 21:33:19 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2458322 ']' 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:18.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:18.293 21:33:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:18.293 [2024-12-05 21:33:19.726008] Starting SPDK v25.01-pre git sha1 a333974e5 / DPDK 24.03.0 initialization... 
00:40:18.293 [2024-12-05 21:33:19.726059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458322 ] 00:40:18.554 [2024-12-05 21:33:19.815055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.554 [2024-12-05 21:33:19.845147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.127 21:33:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:19.127 21:33:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:19.127 21:33:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:19.127 21:33:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:19.389 21:33:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:19.389 21:33:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:19.651 21:33:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:19.651 21:33:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:19.651 [2024-12-05 21:33:21.059380] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:19.912 nvme0n1 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:19.912 21:33:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:19.912 21:33:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:19.912 21:33:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:19.912 21:33:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:19.912 21:33:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:20.173 21:33:21 keyring_linux -- keyring/linux.sh@25 -- # sn=269368535 00:40:20.173 21:33:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:20.173 21:33:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:20.173 21:33:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 269368535 == \2\6\9\3\6\8\5\3\5 ]] 00:40:20.173 21:33:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 269368535 00:40:20.173 21:33:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:20.173 21:33:21 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:20.173 Running I/O for 1 seconds... 00:40:21.561 16070.00 IOPS, 62.77 MiB/s 00:40:21.561 Latency(us) 00:40:21.561 [2024-12-05T20:33:22.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:21.561 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:21.561 nvme0n1 : 1.01 16073.05 62.79 0.00 0.00 7930.06 6444.37 15182.51 00:40:21.561 [2024-12-05T20:33:22.998Z] =================================================================================================================== 00:40:21.561 [2024-12-05T20:33:22.998Z] Total : 16073.05 62.79 0.00 0.00 7930.06 6444.37 15182.51 00:40:21.561 { 00:40:21.561 "results": [ 00:40:21.561 { 00:40:21.561 "job": "nvme0n1", 00:40:21.561 "core_mask": "0x2", 00:40:21.561 "workload": "randread", 00:40:21.561 "status": "finished", 00:40:21.561 "queue_depth": 128, 00:40:21.561 "io_size": 4096, 00:40:21.561 "runtime": 1.007836, 00:40:21.561 "iops": 16073.051567913828, 00:40:21.561 "mibps": 62.78535768716339, 00:40:21.561 "io_failed": 0, 00:40:21.561 "io_timeout": 0, 00:40:21.561 "avg_latency_us": 7930.064858324588, 00:40:21.561 "min_latency_us": 6444.373333333333, 00:40:21.561 "max_latency_us": 15182.506666666666 00:40:21.561 } 00:40:21.561 ], 00:40:21.561 "core_count": 1 00:40:21.561 } 00:40:21.561 21:33:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:21.561 21:33:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:21.561 21:33:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:21.561 21:33:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:21.561 21:33:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:21.561 21:33:22 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:21.561 21:33:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:21.561 21:33:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:21.823 21:33:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:21.823 21:33:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:21.823 21:33:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:21.823 21:33:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:21.823 21:33:22 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:21.823 21:33:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:21.823 [2024-12-05 21:33:23.158488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:21.823 [2024-12-05 21:33:23.159243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131d3e0 (107): Transport endpoint is not connected 00:40:21.823 [2024-12-05 21:33:23.160240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131d3e0 (9): Bad file descriptor 00:40:21.823 [2024-12-05 21:33:23.161241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:21.823 [2024-12-05 21:33:23.161249] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:21.823 [2024-12-05 21:33:23.161255] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:21.823 [2024-12-05 21:33:23.161261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:21.823 request: 00:40:21.823 { 00:40:21.823 "name": "nvme0", 00:40:21.823 "trtype": "tcp", 00:40:21.823 "traddr": "127.0.0.1", 00:40:21.823 "adrfam": "ipv4", 00:40:21.823 "trsvcid": "4420", 00:40:21.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:21.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:21.823 "prchk_reftag": false, 00:40:21.823 "prchk_guard": false, 00:40:21.823 "hdgst": false, 00:40:21.823 "ddgst": false, 00:40:21.823 "psk": ":spdk-test:key1", 00:40:21.823 "allow_unrecognized_csi": false, 00:40:21.823 "method": "bdev_nvme_attach_controller", 00:40:21.823 "req_id": 1 00:40:21.823 } 00:40:21.823 Got JSON-RPC error response 00:40:21.823 response: 00:40:21.823 { 00:40:21.823 "code": -5, 00:40:21.823 "message": "Input/output error" 00:40:21.823 } 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@33 -- # sn=269368535 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 269368535 00:40:21.823 1 links removed 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:21.823 
21:33:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@33 -- # sn=398616446 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 398616446 00:40:21.823 1 links removed 00:40:21.823 21:33:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2458322 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2458322 ']' 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2458322 00:40:21.823 21:33:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:21.824 21:33:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:21.824 21:33:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458322 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458322' 00:40:22.086 killing process with pid 2458322 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 2458322 00:40:22.086 Received shutdown signal, test time was about 1.000000 seconds 00:40:22.086 00:40:22.086 Latency(us) 00:40:22.086 [2024-12-05T20:33:23.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:22.086 [2024-12-05T20:33:23.523Z] =================================================================================================================== 00:40:22.086 [2024-12-05T20:33:23.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 2458322 
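The deliberately failing attach above is reported through SPDK's JSON-RPC layer: `rpc.py` serializes a `bdev_nvme_attach_controller` request over the Unix-domain socket and the target answers with the error object shown in the log. A minimal sketch of that request/response pattern, with the method name and parameters copied from the log output; the socket transport is stubbed with in-memory strings rather than a real connection to `/var/tmp/bperf.sock`:

```python
import json

def build_request(req_id: int, method: str, params: dict) -> str:
    # Shape of the JSON-RPC 2.0 request rpc.py sends over the socket.
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def check_response(raw: str):
    # Raise when the target answers with an error object, as it does for
    # the attach with the unregistered :spdk-test:key1 PSK in the log.
    reply = json.loads(raw)
    if "error" in reply:
        raise RuntimeError(
            f"rpc error {reply['error']['code']}: {reply['error']['message']}")
    return reply["result"]

req = build_request(1, "bdev_nvme_attach_controller", {
    "name": "nvme0", "trtype": "tcp", "traddr": "127.0.0.1",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "psk": ":spdk-test:key1",
})

# Error body copied from the response in the log above.
error_reply = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "error": {"code": -5, "message": "Input/output error"}})
try:
    check_response(error_reply)
except RuntimeError as e:
    print(e)  # rpc error -5: Input/output error
```

The test's `NOT bperf_cmd ...` wrapper expects exactly this failure path: the command exits non-zero, `es=1`, and the cleanup trap then unlinks both session-keyring entries by serial number.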
00:40:22.086 21:33:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2458190 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2458190 ']' 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2458190 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458190 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458190' 00:40:22.086 killing process with pid 2458190 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 2458190 00:40:22.086 21:33:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 2458190 00:40:22.347 00:40:22.347 real 0m5.227s 00:40:22.347 user 0m9.616s 00:40:22.347 sys 0m1.439s 00:40:22.347 21:33:23 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.347 21:33:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:22.347 ************************************ 00:40:22.347 END TEST keyring_linux 00:40:22.347 ************************************ 00:40:22.347 21:33:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:22.347 21:33:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:22.347 21:33:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:22.347 21:33:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:22.347 21:33:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:22.347 21:33:23 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:22.347 21:33:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:22.347 21:33:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.347 21:33:23 -- common/autotest_common.sh@10 -- # set +x 00:40:22.347 21:33:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:22.347 21:33:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:22.347 21:33:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:22.347 21:33:23 -- common/autotest_common.sh@10 -- # set +x 00:40:30.575 INFO: APP EXITING 00:40:30.575 INFO: killing all VMs 00:40:30.575 INFO: killing vhost app 00:40:30.575 INFO: EXIT DONE 00:40:33.871 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:33.871 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:33.871 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:33.871 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:38.078 Cleaning 00:40:38.078 Removing: /var/run/dpdk/spdk0/config 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:38.078 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:38.078 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:38.078 Removing: /var/run/dpdk/spdk1/config 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:38.078 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:38.078 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:38.078 Removing: /var/run/dpdk/spdk2/config 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:38.078 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:38.078 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:38.078 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:38.078 Removing: /var/run/dpdk/spdk3/config 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:38.078 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:38.078 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:38.078 Removing: /var/run/dpdk/spdk4/config 00:40:38.078 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:38.078 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:38.078 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:38.078 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:38.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:38.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:38.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:38.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:38.079 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:38.340 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:40:38.340 Removing: /dev/shm/bdev_svc_trace.1 00:40:38.340 Removing: /dev/shm/nvmf_trace.0 00:40:38.340 Removing: /dev/shm/spdk_tgt_trace.pid1842430 00:40:38.340 Removing: /var/run/dpdk/spdk0 00:40:38.340 Removing: /var/run/dpdk/spdk1 00:40:38.340 Removing: /var/run/dpdk/spdk2 00:40:38.340 Removing: /var/run/dpdk/spdk3 00:40:38.340 Removing: /var/run/dpdk/spdk4 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1840938 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1842430 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1843277 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1844316 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1844403 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1845704 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1845738 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1846192 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1847329 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1847822 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1848202 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1848594 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1849006 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1849405 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1849762 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1850084 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1850341 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1851570 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1854845 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1855213 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1855572 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1855905 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1856278 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1856466 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1856989 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1857011 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1857368 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1857677 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1857744 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1858076 00:40:38.340 Removing: 
/var/run/dpdk/spdk_pid1858522 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1858873 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1859255 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1864205 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1869922 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1883160 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1883846 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1889604 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1889960 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1895702 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1903441 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1906580 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1919840 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1932018 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1934628 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1935801 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1958307 00:40:38.340 Removing: /var/run/dpdk/spdk_pid1963627 00:40:38.340 Removing: /var/run/dpdk/spdk_pid2024950 00:40:38.340 Removing: /var/run/dpdk/spdk_pid2031714 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2039267 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2048381 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2048383 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2049389 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2050394 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2051398 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2052068 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2052099 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2052407 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2052599 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2052741 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2053747 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2054749 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2055757 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2056433 00:40:38.601 Removing: /var/run/dpdk/spdk_pid2056435 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2056776 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2058210 
00:40:38.602 Removing: /var/run/dpdk/spdk_pid2059622 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2070172 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2106782 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2112637 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2114547 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2116770 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2116903 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2117036 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2117246 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2117636 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2119885 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2120730 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2121370 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2123955 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2124516 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2125467 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2131523 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2138579 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2138580 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2138581 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2143872 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2155225 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2160058 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2167960 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2169461 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2171060 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2172713 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2178870 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2184806 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2190690 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2201099 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2201105 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2206843 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2207176 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2207378 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2207856 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2207863 00:40:38.602 Removing: 
/var/run/dpdk/spdk_pid2213920 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2214745 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2220596 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2223702 00:40:38.602 Removing: /var/run/dpdk/spdk_pid2230832 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2237928 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2249074 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2258370 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2258408 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2283696 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2284482 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2285276 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2285967 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2286964 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2287701 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2288390 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2289067 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2294904 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2295235 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2303399 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2303715 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2310651 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2316306 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2328395 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2329236 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2334688 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2335062 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2340766 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2348048 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2351472 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2364657 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2376371 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2378375 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2379388 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2400495 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2405889 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2409513 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2417313 
00:40:38.864 Removing: /var/run/dpdk/spdk_pid2417324 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2424085 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2426509 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2428725 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2430224 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2432435 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2433956 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2444840 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2445505 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2446108 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2449205 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2449650 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2450267 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2455269 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2455501 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2457679 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2458190 00:40:38.864 Removing: /var/run/dpdk/spdk_pid2458322 00:40:38.864 Clean 00:40:39.126 21:33:40 -- common/autotest_common.sh@1453 -- # return 0 00:40:39.126 21:33:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:39.126 21:33:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.126 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:40:39.126 21:33:40 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:39.126 21:33:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.126 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:40:39.126 21:33:40 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:39.126 21:33:40 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:39.126 21:33:40 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:39.126 21:33:40 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:39.126 21:33:40 -- spdk/autotest.sh@398 -- # hostname 00:40:39.126 
21:33:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:39.389 geninfo: WARNING: invalid characters removed from testname! 00:41:05.969 21:34:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:08.059 21:34:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:09.438 21:34:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:11.345 21:34:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:13.253 21:34:14 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:15.164 21:34:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:16.549 21:34:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:16.549 21:34:17 -- spdk/autorun.sh@1 -- $ timing_finish 00:41:16.549 21:34:17 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:41:16.549 21:34:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:16.549 21:34:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:41:16.549 21:34:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:16.549 + [[ -n 1755260 ]] 00:41:16.549 + sudo kill 
1755260 00:41:16.559 [Pipeline] } 00:41:16.577 [Pipeline] // stage 00:41:16.584 [Pipeline] } 00:41:16.601 [Pipeline] // timeout 00:41:16.606 [Pipeline] } 00:41:16.623 [Pipeline] // catchError 00:41:16.628 [Pipeline] } 00:41:16.646 [Pipeline] // wrap 00:41:16.653 [Pipeline] } 00:41:16.668 [Pipeline] // catchError 00:41:16.676 [Pipeline] stage 00:41:16.678 [Pipeline] { (Epilogue) 00:41:16.691 [Pipeline] catchError 00:41:16.693 [Pipeline] { 00:41:16.707 [Pipeline] echo 00:41:16.709 Cleanup processes 00:41:16.715 [Pipeline] sh 00:41:17.002 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:17.002 2471905 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:17.018 [Pipeline] sh 00:41:17.307 ++ grep -v 'sudo pgrep' 00:41:17.307 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:17.307 ++ awk '{print $1}' 00:41:17.307 + sudo kill -9 00:41:17.307 + true 00:41:17.321 [Pipeline] sh 00:41:17.615 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:29.859 [Pipeline] sh 00:41:30.145 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:30.146 Artifacts sizes are good 00:41:30.160 [Pipeline] archiveArtifacts 00:41:30.167 Archiving artifacts 00:41:30.300 [Pipeline] sh 00:41:30.585 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:30.600 [Pipeline] cleanWs 00:41:30.609 [WS-CLEANUP] Deleting project workspace... 00:41:30.609 [WS-CLEANUP] Deferred wipeout is used... 00:41:30.617 [WS-CLEANUP] done 00:41:30.618 [Pipeline] } 00:41:30.635 [Pipeline] // catchError 00:41:30.646 [Pipeline] sh 00:41:30.931 + logger -p user.info -t JENKINS-CI 00:41:30.941 [Pipeline] } 00:41:30.955 [Pipeline] // stage 00:41:30.960 [Pipeline] } 00:41:30.973 [Pipeline] // node 00:41:30.978 [Pipeline] End of Pipeline 00:41:31.007 Finished: SUCCESS